Search Results for author: Tommi S. Jaakkola

Found 41 papers, 14 papers with code

Conformal Language Modeling

1 code implementation · 16 Jun 2023 · Victor Quach, Adam Fisch, Tal Schuster, Adam Yala, Jae Ho Sohn, Tommi S. Jaakkola, Regina Barzilay

Translating this process to conformal prediction, we calibrate a stopping rule for sampling different outputs from the LM that get added to a growing set of candidates until we are confident that the output set is sufficient.
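
As a rough illustration of that sampling-until-confident loop, here is a minimal Python sketch. The `sample_candidate` function, its per-sample score, and the threshold `lam` are all hypothetical stand-ins for components the paper calibrates conformally; this is not the authors' implementation.

```python
import random

def sample_candidate(prompt):
    # Hypothetical stand-in for one sampled LM generation plus a per-sample
    # quality/confidence score.
    return f"candidate-{random.randint(0, 9)}", random.random()

def conformal_sampling(prompt, lam=0.8, k_max=20):
    """Grow a candidate set until a calibrated stopping rule fires.

    `lam` plays the role of a threshold that would be calibrated on held-out
    data so that the returned set is "sufficient" with high probability;
    here it is just a fixed number for illustration.
    """
    candidates = []
    for _ in range(k_max):
        text, score = sample_candidate(prompt)
        if text not in (t for t, _ in candidates):
            candidates.append((text, score))
        # Stop once the best per-candidate score clears the threshold.
        if max(s for _, s in candidates) >= lam:
            break
    return [t for t, _ in candidates]

print(conformal_sampling("Summarize the findings:"))
```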

Conformal Prediction · Language Modelling · +2

PEOPL: Characterizing Privately Encoded Open Datasets with Public Labels

no code implementations · 31 Mar 2023 · Homa Esfahanizadeh, Adam Yala, Rafael G. L. D'Oliveira, Andrea J. D. Jaba, Victor Quach, Ken R. Duffy, Tommi S. Jaakkola, Vinod Vaikuntanathan, Manya Ghobadi, Regina Barzilay, Muriel Médard

Allowing organizations to share their data for training of machine learning (ML) models without unintended information leakage is an open problem in practice.

Adversarial Support Alignment

1 code implementation · ICLR 2022 · Shangyuan Tong, Timur Garipov, Yang Zhang, Shiyu Chang, Tommi S. Jaakkola

Furthermore, we show that our approach can be viewed as a limit of existing notions of alignment by increasing transportation assignment tolerance.

Domain Adaptation

Understanding Interlocking Dynamics of Cooperative Rationalization

1 code implementation · NeurIPS 2021 · Mo Yu, Yang Zhang, Shiyu Chang, Tommi S. Jaakkola

The selection mechanism is commonly integrated into the model itself by specifying a two-component cascaded system consisting of a rationale generator, which makes a binary selection of the input features (which is the rationale), and a predictor, which predicts the output based only on the selected features.
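
A minimal PyTorch sketch of such a generator/predictor cascade is shown below; the architecture, the straight-through hard mask, and all sizes are illustrative assumptions rather than the configuration studied in the paper.

```python
import torch
import torch.nn as nn

class RationaleModel(nn.Module):
    """Minimal generator/predictor cascade (not the paper's exact architecture)."""

    def __init__(self, vocab_size=1000, dim=64, n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.generator = nn.Linear(dim, 1)        # per-token selection logit
        self.predictor = nn.GRU(dim, dim, batch_first=True)
        self.classify = nn.Linear(dim, n_classes)

    def forward(self, tokens):
        x = self.embed(tokens)                    # (B, T, dim)
        logits = self.generator(x).squeeze(-1)    # (B, T)
        # Hard 0/1 rationale mask with a straight-through estimator so
        # gradients still flow back into the generator.
        probs = torch.sigmoid(logits)
        mask = (probs > 0.5).float() + probs - probs.detach()
        masked = x * mask.unsqueeze(-1)           # predictor sees only the rationale
        _, h = self.predictor(masked)
        return self.classify(h.squeeze(0)), mask

model = RationaleModel()
pred, mask = model(torch.randint(0, 1000, (4, 12)))
print(pred.shape, mask.shape)  # torch.Size([4, 2]) torch.Size([4, 12])
```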

Hard Attention

Text Style Transfer with Confounders

no code implementations · 29 Sep 2021 · Tianxiao Shen, Regina Barzilay, Tommi S. Jaakkola

Existing methods for style transfer operate either with paired sentences or distributionally matched corpora which differ only in the desired style.

Style Transfer · Text Style Transfer

Trading Coverage for Precision: Conformal Prediction with Limited False Discoveries

no code implementations · 29 Sep 2021 · Adam Fisch, Tal Schuster, Tommi S. Jaakkola, Regina Barzilay

In this paper, we develop a new approach to conformal prediction in which we aim to output a precise set of promising prediction candidates that is guaranteed to contain a limited number of incorrect answers.
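
The following toy sketch conveys the coverage-versus-precision idea only: it greedily adds candidates while an estimated count of incorrect inclusions stays under a budget. The `err_prob` calibration curve is hypothetical, and the procedure carries none of the paper's conformal guarantees.

```python
def limited_fdr_set(candidates, scores, err_prob, budget=1.0):
    """Greedy sketch: add candidates in score order while the *estimated*
    number of incorrect inclusions stays within `budget`.

    `err_prob(score)` is assumed to be calibrated on held-out data (e.g. by
    conformal calibration); here it is just a user-supplied callable.
    """
    order = sorted(range(len(candidates)), key=lambda i: -scores[i])
    chosen, expected_false = [], 0.0
    for i in order:
        if expected_false + err_prob(scores[i]) > budget:
            break
        chosen.append(candidates[i])
        expected_false += err_prob(scores[i])
    return chosen

# Toy usage with a made-up calibration curve.
cands = ["mol-A", "mol-B", "mol-C", "mol-D"]
scores = [0.9, 0.7, 0.4, 0.2]
print(limited_fdr_set(cands, scores, err_prob=lambda s: 1 - s, budget=1.0))
```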

Conformal Prediction · Drug Discovery

Fundamental Limits and Tradeoffs in Invariant Representation Learning

no code implementations · NeurIPS 2023 · Han Zhao, Chen Dan, Bryon Aragam, Tommi S. Jaakkola, Geoffrey J. Gordon, Pradeep Ravikumar

A wide range of machine learning applications, such as privacy-preserving learning, algorithmic fairness, and domain adaptation/generalization, involve learning invariant representations of the data that aim to achieve two competing goals: (a) maximize information or accuracy with respect to a target response, and (b) maximize invariance or independence with respect to a set of protected features (e.g., for fairness or privacy).
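
One common way to instantiate these two competing goals is an encoder trained with a task head plus an adversarial head on the protected attribute via gradient reversal; the sketch below is that generic recipe, not a method from this paper, which instead analyzes the fundamental limits of any such trade-off.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    # Flips the gradient so the encoder is pushed to *hurt* the protected head.
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.clone()

    @staticmethod
    def backward(ctx, grad):
        return -ctx.lam * grad, None

encoder = nn.Sequential(nn.Linear(10, 32), nn.ReLU())
task_head = nn.Linear(32, 2)        # goal (a): predict the target response
protected_head = nn.Linear(32, 2)   # goal (b): should fail to predict the protected feature

x = torch.randn(16, 10)
y = torch.randint(0, 2, (16,))      # target labels
a = torch.randint(0, 2, (16,))      # protected attribute

z = encoder(x)
loss = nn.functional.cross_entropy(task_head(z), y) \
     + nn.functional.cross_entropy(protected_head(GradReverse.apply(z, 1.0)), a)
loss.backward()
print(float(loss))
```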

Domain Adaptation · Fairness · +4

Locally Constant Networks

1 code implementation · ICLR 2020 · Guang-He Lee, Tommi S. Jaakkola

We show how neural models can be used to realize piece-wise constant functions such as decision trees.
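
To make the connection concrete, the sketch below reads off the ReLU activation pattern of a tiny network: inputs sharing a pattern lie in the same linear region, which is the sense in which a locally constant predictor behaves like the leaves of an (oblique) decision tree. It is only an illustration, not the paper's construction from network derivatives.

```python
import torch
import torch.nn as nn

# A tiny ReLU net; its activation pattern (which units are "on") partitions
# the input space into linear regions, like the leaves of an oblique tree.
net = nn.Sequential(nn.Linear(2, 3), nn.ReLU(), nn.Linear(3, 3), nn.ReLU())

def activation_pattern(x):
    pattern = []
    h = x
    for layer in net:
        h = layer(h)
        if isinstance(layer, nn.ReLU):
            pattern.append((h > 0).int())
    return torch.cat(pattern, dim=-1)  # one binary code per linear region

x = torch.randn(5, 2)
codes = activation_pattern(x)
# A locally constant predictor assigns one value per code, so any two inputs
# with the same code receive the same output.
print(codes)
```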

Invariant Rationalization

1 code implementation · ICML 2020 · Shiyu Chang, Yang Zhang, Mo Yu, Tommi S. Jaakkola

Selective rationalization improves neural network interpretability by identifying a small subset of input features -- the rationale -- that best explains or supports the prediction.

Unsupervised Hierarchy Matching with Optimal Transport over Hyperbolic Spaces

no code implementations · 6 Nov 2019 · David Alvarez-Melis, Youssef Mroueh, Tommi S. Jaakkola

This paper focuses on the problem of unsupervised alignment of hierarchical data such as ontologies or lexical databases.

Ontology Matching · Word Alignment

Rethinking Cooperative Rationalization: Introspective Extraction and Complement Control

2 code implementations · IJCNLP 2019 · Mo Yu, Shiyu Chang, Yang Zhang, Tommi S. Jaakkola

Moreover, we explicitly control the rationale complement via an adversary so as not to leave any useful information out of the selection.

A Game Theoretic Approach to Class-wise Selective Rationalization

1 code implementation · NeurIPS 2019 · Shiyu Chang, Yang Zhang, Mo Yu, Tommi S. Jaakkola

Selection of input features such as relevant pieces of text has become a common technique of highlighting how complex neural predictors operate.

Counterfactual · Sentiment Analysis · +1

Oblique Decision Trees from Derivatives of ReLU Networks

1 code implementation · 30 Sep 2019 · Guang-He Lee, Tommi S. Jaakkola

We show how neural models can be used to realize piece-wise constant functions such as decision trees.

Drug Discovery

Towards Robust, Locally Linear Deep Networks

no code implementations · ICLR 2019 · Guang-He Lee, David Alvarez-Melis, Tommi S. Jaakkola

In this paper, we propose a new learning problem to encourage deep networks to have stable derivatives over larger regions.
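
An illustrative (and assumed, not the paper's) way to encourage stable derivatives is to penalize how much the input gradient changes under small perturbations; the paper instead derives and maximizes margins of the ReLU regions in which the network stays linear.

```python
import torch

def derivative_stability_penalty(model, x, eps=0.05, n=8):
    """Penalize changes in the input gradient within a small neighborhood of x."""
    def grad_at(inp):
        inp = inp.clone().requires_grad_(True)
        out = model(inp).sum()
        return torch.autograd.grad(out, inp, create_graph=True)[0]

    g0, penalty = grad_at(x), 0.0
    for _ in range(n):
        g = grad_at(x + eps * torch.randn_like(x))
        penalty = penalty + (g - g0).pow(2).sum()
    return penalty / n

model = torch.nn.Sequential(torch.nn.Linear(4, 16), torch.nn.ReLU(), torch.nn.Linear(16, 1))
x = torch.randn(8, 4)
loss = model(x).pow(2).mean() + 0.1 * derivative_stability_penalty(model, x)
loss.backward()
print(float(loss))
```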

Tight Certificates of Adversarial Robustness for Randomly Smoothed Classifiers

1 code implementation · NeurIPS 2019 · Guang-He Lee, Yang Yuan, Shiyu Chang, Tommi S. Jaakkola

Specifically, an $\ell_2$ bounded adversary cannot alter the ensemble prediction generated by an additive isotropic Gaussian noise, where the radius for the adversary depends on both the variance of the distribution as well as the ensemble margin at the point of interest.
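
For intuition, here is a minimal randomized-smoothing sketch with the standard Gaussian $\ell_2$ radius of Cohen et al. (2019); the paper above derives its own, generally tighter, certificates, and a real implementation would use a confidence lower bound on the top-class probability rather than the raw Monte-Carlo estimate.

```python
import torch
from scipy.stats import norm

def smoothed_predict(f, x, sigma=0.5, n=1000, n_classes=10):
    """Predict with the Gaussian-smoothed classifier g(x) = E_noise[f(x + noise)]."""
    noise = torch.randn(n, *x.shape) * sigma
    counts = torch.bincount(f(x + noise), minlength=n_classes)
    top = counts.argmax().item()
    p_a = counts[top].item() / n
    # Standard L2 certificate for Gaussian smoothing (Cohen et al., 2019).
    radius = sigma * norm.ppf(p_a) if p_a > 0.5 else 0.0
    return top, radius

# Toy base classifier on 2-D inputs: class = sign of the first coordinate.
f = lambda batch: (batch[:, 0] > 0).long()
print(smoothed_predict(f, torch.tensor([0.8, -0.3]), n_classes=2))
```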

Adversarial Robustness

Bidirectional Inference Networks: A Class of Deep Bayesian Networks for Health Profiling

no code implementations · 6 Feb 2019 · Hao Wang, Chengzhi Mao, Hao He, Ming-Min Zhao, Tommi S. Jaakkola, Dina Katabi

We consider the problem of inferring the values of an arbitrary set of variables (e.g., risk of diseases) given other observed variables (e.g., symptoms and diagnosed diseases) and high-dimensional signals (e.g., MRI images or EEG).

Computational Efficiency · EEG · +3

Gromov-Wasserstein Alignment of Word Embedding Spaces

no code implementations · EMNLP 2018 · David Alvarez-Melis, Tommi S. Jaakkola

Cross-lingual or cross-domain correspondences play key roles in tasks ranging from machine translation to transfer learning.
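
For readers who want to try the underlying tool, the POT library exposes a Gromov-Wasserstein solver that couples two metric spaces using only their intra-space distances; the toy example below uses random point clouds in place of word embedding spaces and omits the paper's refinements.

```python
import numpy as np
import ot

# Two small point clouds standing in for two embedding spaces.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))
Y = rng.normal(size=(20, 5))

# Intra-space distance matrices (GW only compares relational structure).
C1 = ot.dist(X, X); C1 /= C1.max()
C2 = ot.dist(Y, Y); C2 /= C2.max()

p = ot.unif(20)
q = ot.unif(20)

# Gromov-Wasserstein coupling: a soft correspondence between the two spaces.
T = ot.gromov.gromov_wasserstein(C1, C2, p, q, loss_fun='square_loss')
print(T.shape)  # (20, 20) transport plan
```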

Machine Translation · Transfer Learning · +3

Game-Theoretic Interpretability for Temporal Modeling

no code implementations · 30 Jun 2018 · Guang-He Lee, David Alvarez-Melis, Tommi S. Jaakkola

In contrast, we focus on temporal modeling and the problem of tailoring the predictor, functionally, towards an interpretable family.

Towards Optimal Transport with Global Invariances

no code implementations · 25 Jun 2018 · David Alvarez-Melis, Stefanie Jegelka, Tommi S. Jaakkola

Many problems in machine learning involve calculating correspondences between sets of objects, such as point clouds or images.

Translation · Word Embeddings · +1

On the Robustness of Interpretability Methods

2 code implementations · 21 Jun 2018 · David Alvarez-Melis, Tommi S. Jaakkola

We argue that robustness of explanations (i.e., that similar inputs should give rise to similar explanations) is a key desideratum for interpretability.
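
A simple way to probe this desideratum, loosely in the spirit of the paper's local Lipschitz estimates, is to compare gradient explanations at nearby inputs; the sketch below does exactly that and is illustrative only.

```python
import torch

def saliency(model, x):
    # Simple gradient explanation: d(top logit)/d(input).
    x = x.clone().requires_grad_(True)
    model(x).max().backward()
    return x.grad.detach()

def explanation_lipschitz(model, x, eps=0.1, n=50):
    """Monte-Carlo estimate of how fast the explanation changes near x."""
    e0, worst = saliency(model, x), 0.0
    for _ in range(n):
        delta = eps * torch.randn_like(x)
        ratio = (saliency(model, x + delta) - e0).norm() / delta.norm()
        worst = max(worst, ratio.item())
    return worst

model = torch.nn.Sequential(torch.nn.Linear(4, 8), torch.nn.Tanh(), torch.nn.Linear(8, 3))
print(explanation_lipschitz(model, torch.randn(4)))
```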

Towards Robust Interpretability with Self-Explaining Neural Networks

no code implementations · NeurIPS 2018 · David Alvarez-Melis, Tommi S. Jaakkola

Most recent work on interpretability of complex machine learning models has focused on estimating a posteriori explanations for previously trained models around specific predictions.

Structured Optimal Transport

no code implementations · 17 Dec 2017 · David Alvarez-Melis, Tommi S. Jaakkola, Stefanie Jegelka

Optimal Transport has recently gained interest in machine learning for applications ranging from domain adaptation, sentence similarities to deep learning.

BIG-bench Machine Learning · Domain Adaptation · +1

A causal framework for explaining the predictions of black-box sequence-to-sequence models

no code implementations · EMNLP 2017 · David Alvarez-Melis, Tommi S. Jaakkola

We interpret the predictions of any black-box structured input-structured output model around a specific input-output pair.

From random walks to distances on unweighted graphs

no code implementations · NeurIPS 2015 · Tatsunori B. Hashimoto, Yi Sun, Tommi S. Jaakkola

Using these techniques we generalize results on the degeneracy of hitting times and analyze a metric based on the Laplace transformed hitting time (LTHT).

Clustering

Word, graph and manifold embedding from Markov processes

no code implementations · 18 Sep 2015 · Tatsunori B. Hashimoto, David Alvarez-Melis, Tommi S. Jaakkola

Continuous vector representations of words and objects appear to carry surprisingly rich semantic content.

Dimensionality Reduction · Word Embeddings

Metric recovery from directed unweighted graphs

no code implementations · 20 Nov 2014 · Tatsunori B. Hashimoto, Yi Sun, Tommi S. Jaakkola

We demonstrate empirically that the estimator performs well on simulated examples as well as on real-world co-purchasing graphs even with a small number of points and degree scaling as low as $\log(n)$.

Inverse Covariance Estimation for High-Dimensional Data in Linear Time and Space: Spectral Methods for Riccati and Sparse Models

no code implementations · 26 Sep 2013 · Jean Honorio, Tommi S. Jaakkola

Furthermore, instead of obtaining a single solution for a specific regularization parameter, our algorithm finds the whole solution path.

Convergence Rate Analysis of MAP Coordinate Minimization Algorithms

no code implementations · NeurIPS 2012 · Ofer Meshi, Amir Globerson, Tommi S. Jaakkola

We also provide a simple dual to primal mapping that yields feasible primal solutions with a guaranteed rate of convergence.

Clusters and Coarse Partitions in LP Relaxations

no code implementations · NeurIPS 2008 · David Sontag, Amir Globerson, Tommi S. Jaakkola

We propose a new class of consistency constraints for Linear Programming (LP) relaxations for finding the most probable (MAP) configuration in graphical models.

Protein Design

Fixing Max-Product: Convergent Message Passing Algorithms for MAP LP-Relaxations

no code implementations · NeurIPS 2007 · Amir Globerson, Tommi S. Jaakkola

We present a novel message passing algorithm for approximating the MAP problem in graphical models.
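
For context, the sketch below runs plain max-product on a tiny chain MRF and decodes the MAP assignment by backtracking; this is the basic variant (not convergent in general on loopy graphs) whose shortcomings motivate the convergent LP-based message updates the paper proposes.

```python
import numpy as np

# A 3-node chain MRF with binary variables: unary and pairwise log-potentials.
unary = [np.array([0.2, 0.0]), np.array([0.0, 0.5]), np.array([0.3, 0.0])]
pair = [np.log(np.array([[1.0, 0.2], [0.2, 1.0]])),   # favors agreement
        np.log(np.array([[1.0, 0.2], [0.2, 1.0]]))]

# Forward max-product messages: m[i][b] = best score of everything left of i
# given x_i = b.
m = np.zeros((3, 2))
for i in range(1, 3):
    m[i] = np.max(unary[i - 1] + m[i - 1] + pair[i - 1].T, axis=1)

# Decode the MAP assignment by backtracking from the last node.
x = [0, 0, 0]
x[2] = int(np.argmax(unary[2] + m[2]))
for i in (1, 0):
    x[i] = int(np.argmax(unary[i] + m[i] + pair[i][:, x[i + 1]]))
print("MAP assignment:", x)
```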
