Search Results for author: Thomas Icard

Found 15 papers, 4 papers with code

Comparing Causal Frameworks: Potential Outcomes, Structural Models, Graphs, and Abstractions

no code implementations · 25 Jun 2023 · Duligur Ibeling, Thomas Icard

The aim of this paper is to make clear and precise the relationship between the Rubin causal model (RCM) and structural causal model (SCM) frameworks for causal inference.

Causal Inference

Finding Alignments Between Interpretable Causal Variables and Distributed Neural Representations

no code implementations · 5 Mar 2023 · Atticus Geiger, Zhengxuan Wu, Christopher Potts, Thomas Icard, Noah D. Goodman

In DAS, we find the alignment between high-level and low-level models using gradient descent rather than conducting a brute-force search, and we allow individual neurons to play multiple distinct roles by analyzing representations in non-standard bases (distributed representations).
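The core operation the abstract describes can be sketched numerically (a hypothetical illustration, not the authors' code): an interchange intervention carried out in a rotated basis, swapping the first k rotated coordinates of a source representation into a base representation. In DAS the rotation is learned by gradient descent; here a fixed rotation stands in for it.

```python
import numpy as np

def distributed_interchange(h_base, h_source, R, k):
    """Swap the first k coordinates of h_source into h_base,
    working in the (possibly non-standard) basis given by the
    orthogonal matrix R, then rotate back."""
    rb, rs = R @ h_base, R @ h_source
    patched = np.concatenate([rs[:k], rb[k:]])
    return R.T @ patched  # back to the standard basis

# With R = identity, this reduces to an ordinary
# neuron-aligned interchange intervention.
h_base = np.array([1.0, 2.0, 3.0])
h_source = np.array([9.0, 8.0, 7.0])
out = distributed_interchange(h_base, h_source, np.eye(3), k=1)
# out is [9.0, 2.0, 3.0]: the first neuron's value comes
# from the source, the rest from the base.
```

With a non-identity R, the swapped "coordinates" are directions in activation space rather than individual neurons, which is what lets one neuron participate in several roles.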

Explainable Artificial Intelligence (XAI)

Causal Abstraction for Faithful Model Interpretation

no code implementations · 11 Jan 2023 · Atticus Geiger, Chris Potts, Thomas Icard

A faithful and interpretable explanation of an AI model's behavior and internal structure is a high-level explanation that is human-intelligible yet consistent with the known, but often opaque, low-level causal details of the model.

Explainable Artificial Intelligence (XAI)

Causal Abstraction with Soft Interventions

no code implementations · 22 Nov 2022 · Riccardo Massidda, Atticus Geiger, Thomas Icard, Davide Bacciu

Causal abstraction provides a theory describing how several causal models can represent the same system at different levels of detail.
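The picture the abstract invokes can be made concrete with a toy sketch (our own example using hard interventions only; the paper's contribution concerns the soft-intervention case): an abstraction map τ from low-level settings to high-level settings that commutes with running the models.

```python
def low_model(x1, x2):
    # Low-level model: two binary causes, Y = X1 + X2.
    return {"X1": x1, "X2": x2, "Y": x1 + x2}

def high_model(x):
    # High-level model: one aggregated cause, Y = X.
    return {"X": x, "Y": x}

def tau(low_setting):
    # Abstraction map: aggregate X1 and X2 into X, keep Y.
    return {"X": low_setting["X1"] + low_setting["X2"],
            "Y": low_setting["Y"]}

# Consistency check: abstracting a low-level run matches the
# high-level run on the abstracted intervention, for every
# low-level intervention (x1, x2).
consistent = all(
    tau(low_model(x1, x2)) == high_model(x1 + x2)
    for x1 in (0, 1) for x2 in (0, 1)
)
```

When such a commuting map exists, the two models can legitimately be read as descriptions of the same system at different levels of detail.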

Inducing Causal Structure for Interpretable Neural Networks

2 code implementations · 1 Dec 2021 · Atticus Geiger, Zhengxuan Wu, Hanson Lu, Josh Rozner, Elisa Kreiss, Thomas Icard, Noah D. Goodman, Christopher Potts

In IIT, we (1) align variables in a causal model (e.g., a deterministic program or Bayesian network) with representations in a neural model and (2) train the neural model to match the counterfactual behavior of the causal model on a base input when aligned representations in both models are set to the values they would take for a source input.
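The counterfactual training signal described above can be sketched on a toy causal program (a hypothetical illustration, not the paper's code): run the program on a base input, but with one intermediate variable set to the value it takes on a source input; the resulting output is the target the neural model is trained to reproduce after the analogous representation swap.

```python
def causal_model(x, y, z):
    s = x + y            # intermediate variable S
    return int(s > z)    # output

def interchange_on_s(base, source):
    """Run the causal model on `base`, but with S set to the
    value it would take on `source` (an interchange intervention)."""
    s_from_source = source["x"] + source["y"]
    return int(s_from_source > base["z"])

base = {"x": 1, "y": 2, "z": 5}
source = {"x": 4, "y": 4, "z": 0}
plain = causal_model(**base)             # 3 > 5 -> 0
target = interchange_on_s(base, source)  # 8 > 5 -> 1
```

In IIT proper, `target` would supervise the neural model after patching the aligned hidden representation from the source forward pass into the base forward pass.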

Data Augmentation · Natural Language Inference

On the Opportunities and Risks of Foundation Models

3 code implementations · 16 Aug 2021 · Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S. Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, Erik Brynjolfsson, Shyamal Buch, Dallas Card, Rodrigo Castellon, Niladri Chatterji, Annie Chen, Kathleen Creel, Jared Quincy Davis, Dora Demszky, Chris Donahue, Moussa Doumbouya, Esin Durmus, Stefano Ermon, John Etchemendy, Kawin Ethayarajh, Li Fei-Fei, Chelsea Finn, Trevor Gale, Lauren Gillespie, Karan Goel, Noah Goodman, Shelby Grossman, Neel Guha, Tatsunori Hashimoto, Peter Henderson, John Hewitt, Daniel E. Ho, Jenny Hong, Kyle Hsu, Jing Huang, Thomas Icard, Saahil Jain, Dan Jurafsky, Pratyusha Kalluri, Siddharth Karamcheti, Geoff Keeling, Fereshte Khani, Omar Khattab, Pang Wei Koh, Mark Krass, Ranjay Krishna, Rohith Kuditipudi, Ananya Kumar, Faisal Ladhak, Mina Lee, Tony Lee, Jure Leskovec, Isabelle Levent, Xiang Lisa Li, Xuechen Li, Tengyu Ma, Ali Malik, Christopher D. Manning, Suvir Mirchandani, Eric Mitchell, Zanele Munyikwa, Suraj Nair, Avanika Narayan, Deepak Narayanan, Ben Newman, Allen Nie, Juan Carlos Niebles, Hamed Nilforoshan, Julian Nyarko, Giray Ogut, Laurel Orr, Isabel Papadimitriou, Joon Sung Park, Chris Piech, Eva Portelance, Christopher Potts, Aditi Raghunathan, Rob Reich, Hongyu Ren, Frieda Rong, Yusuf Roohani, Camilo Ruiz, Jack Ryan, Christopher Ré, Dorsa Sadigh, Shiori Sagawa, Keshav Santhanam, Andy Shih, Krishnan Srinivasan, Alex Tamkin, Rohan Taori, Armin W. Thomas, Florian Tramèr, Rose E. Wang, William Wang, Bohan Wu, Jiajun Wu, Yuhuai Wu, Sang Michael Xie, Michihiro Yasunaga, Jiaxuan You, Matei Zaharia, Michael Zhang, Tianyi Zhang, Xikun Zhang, Yuhui Zhang, Lucia Zheng, Kaitlyn Zhou, Percy Liang

AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are trained on broad data at scale and are adaptable to a wide range of downstream tasks.

Transfer Learning

A Topological Perspective on Causal Inference

no code implementations · NeurIPS 2021 · Duligur Ibeling, Thomas Icard

This paper presents a topological learning-theoretic perspective on causal inference by introducing a series of topologies defined on general spaces of structural causal models (SCMs).

Causal Inference

Causal Abstractions of Neural Networks

no code implementations · NeurIPS 2021 · Atticus Geiger, Hanson Lu, Thomas Icard, Christopher Potts

Structural analysis methods (e.g., probing and feature attribution) are increasingly important tools for neural network analysis.

Natural Language Inference

Intention as Commitment toward Time

no code implementations · 17 Apr 2020 · Marc van Zee, Dragan Doder, Leendert van der Torre, Mehdi Dastani, Thomas Icard, Eric Pacuit

The first contribution is a logic for reasoning about intention, time and belief, in which assumptions of intentions are represented by preconditions of intended actions.

Probabilistic Reasoning across the Causal Hierarchy

no code implementations · 9 Jan 2020 · Duligur Ibeling, Thomas Icard

We propose a formalization of the three-tier causal hierarchy of association, intervention, and counterfactuals as a series of probabilistic logical languages.
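The three tiers can be made concrete on a toy SCM (a hypothetical example of ours, not the paper's formal languages): two fair exogenous coins U1 and U2, with structural equations X := U1 and Y := X OR U2. Each tier's probability is then computed by enumerating exogenous worlds.

```python
from fractions import Fraction

# Exogenous worlds: two fair coins U1, U2, enumerated uniformly.
EXOG = [(u1, u2) for u1 in (0, 1) for u2 in (0, 1)]

def model(u1, u2, do_x=None):
    # Structural equations: X := U1 (unless intervened), Y := X OR U2.
    x = u1 if do_x is None else do_x
    y = x | u2
    return x, y

def prob(event, do_x=None, given=None):
    # Condition (if `given` is set) on the *observed*, non-intervened
    # run, then evaluate the event on the (possibly intervened) run.
    worlds = EXOG
    if given is not None:
        worlds = [u for u in worlds if given(*model(*u))]
    hits = [u for u in worlds if event(*model(*u, do_x=do_x))]
    return Fraction(len(hits), len(worlds))

association = prob(lambda x, y: y == 1)                    # tier 1: 3/4
intervention = prob(lambda x, y: y == 1, do_x=0)           # tier 2: 1/2
counterfactual = prob(lambda x, y: y == 1, do_x=0,
                      given=lambda x, y: x == 1 and y == 1)  # tier 3: 1/2
```

The counterfactual query reads: having observed X = 1 and Y = 1, what is the probability that Y would still be 1 had X been set to 0? Conditioning fixes U1 = 1 and leaves U2 uniform, so the answer is 1/2.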

Bayesian Inference

On Open-Universe Causal Reasoning

no code implementations · 4 Jul 2019 · Duligur Ibeling, Thomas Icard

We extend two kinds of causal models, structural equation models and simulation models, to infinite variable spaces.

On the Conditional Logic of Simulation Models

no code implementations · 8 May 2018 · Duligur Ibeling, Thomas Icard

We propose analyzing conditional reasoning by appeal to a notion of intervention on a simulation program, formalizing and subsuming a number of approaches to conditional thinking in the recent AI literature.
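The idea of intervening on a simulation program can be sketched as follows (a minimal illustration with our own variable names, not the paper's formalism): a conditional such as "if it were not raining, would the grass be wet?" is evaluated by re-running the program with the antecedent's assignment overridden.

```python
def simulate(interventions=None):
    """A tiny deterministic simulation program; `interventions`
    overrides assignment statements mid-execution."""
    iv = interventions or {}
    rain = iv.get("rain", True)                # exogenous setting
    sprinkler = iv.get("sprinkler", not rain)  # reacts to rain
    wet = iv.get("wet", rain or sprinkler)     # downstream effect
    return {"rain": rain, "sprinkler": sprinkler, "wet": wet}

factual = simulate()
# Intervening on `rain` alone: the sprinkler equation still runs,
# compensates, and the grass stays wet.
no_rain = simulate({"rain": False})
# Intervening on both antecedents makes the grass dry.
dry = simulate({"rain": False, "sprinkler": False})
```

The key point is that an intervention cuts into the program's execution: overridden variables keep their forced values, while every later statement still runs as written.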
