1 code implementation • 16 Jun 2023 • Victor Quach, Adam Fisch, Tal Schuster, Adam Yala, Jae Ho Sohn, Tommi S. Jaakkola, Regina Barzilay
Translating this process to conformal prediction, we calibrate a stopping rule for sampling outputs from the LM, which are added to a growing set of candidates until we are confident that the output set is sufficient.
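A minimal sketch of the sampling loop this describes, assuming hypothetical `lm_sample` and `quality_score` callables and a threshold `lambda_hat` that would come from conformal calibration on held-out data (not shown):

```python
# Hedged sketch of a calibrated stopping rule for LM sampling.
# `lm_sample`, `quality_score`, and `lambda_hat` are hypothetical
# stand-ins, not the paper's exact procedure.

def conformal_sample_set(lm_sample, quality_score, lambda_hat, max_draws=20):
    """Grow a candidate set until its confidence score clears the
    calibrated threshold, then stop sampling."""
    candidates = []
    for _ in range(max_draws):
        candidates.append(lm_sample())            # draw one LM output
        # One simple set-level confidence: the best score seen so far.
        if max(quality_score(y) for y in candidates) >= lambda_hat:
            break                                  # set judged sufficient
    return candidates
```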
1 code implementation • 8 Apr 2023 • Mohamed Amine Ketata, Cedrik Laue, Ruslan Mammadov, Hannes Stärk, Menghua Wu, Gabriele Corso, Céline Marquet, Regina Barzilay, Tommi S. Jaakkola
Understanding how proteins structurally interact is crucial to modern biology, with applications in drug discovery and protein design.
no code implementations • 31 Mar 2023 • Homa Esfahanizadeh, Adam Yala, Rafael G. L. D'Oliveira, Andrea J. D. Jaba, Victor Quach, Ken R. Duffy, Tommi S. Jaakkola, Vinod Vaikuntanathan, Manya Ghobadi, Regina Barzilay, Muriel Médard
Allowing organizations to share their data for training of machine learning (ML) models without unintended information leakage is an open problem in practice.
1 code implementation • ICLR 2022 • Shangyuan Tong, Timur Garipov, Yang Zhang, Shiyu Chang, Tommi S. Jaakkola
Furthermore, we show that our approach can be viewed as a limit of existing notions of alignment by increasing transportation assignment tolerance.
no code implementations • 28 Jan 2022 • Adam Yala, Victor Quach, Homa Esfahanizadeh, Rafael G. L. D'Oliveira, Ken R. Duffy, Muriel Médard, Tommi S. Jaakkola, Regina Barzilay
We quantify privacy as the number of attacker guesses required to re-identify a single image (guesswork).
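To make the metric concrete, here is a small illustration of guesswork as the rank of the true identity under the attacker's posterior; the attacker model and candidate pool are hypothetical:

```python
import numpy as np

# Hypothetical illustration of "guesswork": the number of guesses an
# attacker needs when trying candidates in order of decreasing
# posterior probability until the true identity is found.

def guesswork(posterior, true_index):
    """posterior: attacker's probability over candidate identities.
    Returns the 1-based rank of the true identity under an optimal
    guessing order."""
    order = np.argsort(-np.asarray(posterior))   # most likely first
    return int(np.where(order == true_index)[0][0]) + 1

# A confident attacker needs 1 guess; a misled one needs more.
print(guesswork([0.7, 0.2, 0.1], true_index=0))  # -> 1
print(guesswork([0.1, 0.2, 0.7], true_index=0))  # -> 3
```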
1 code implementation • NeurIPS 2021 • Mo Yu, Yang Zhang, Shiyu Chang, Tommi S. Jaakkola
The selection mechanism is commonly integrated into the model itself as a two-component cascade: a rationale generator, which makes a binary selection of the input features (the rationale), and a predictor, which predicts the output from the selected features alone.
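A minimal PyTorch sketch of such a cascade; the layer sizes and the straight-through binarization are illustrative choices, not the paper's exact architecture or training method:

```python
import torch
import torch.nn as nn

class RationaleModel(nn.Module):
    """Generator selects features; predictor sees only the selection."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.generator = nn.Linear(dim, dim)   # one selection logit per feature
        self.predictor = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, x):
        probs = torch.sigmoid(self.generator(x))
        # Hard binary mask with a straight-through gradient estimator.
        mask = (probs > 0.5).float() + probs - probs.detach()
        rationale = x * mask                   # the selected features
        return self.predictor(rationale), mask

model = RationaleModel(dim=10)
y_hat, mask = model(torch.randn(2, 10))
```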
no code implementations • 29 Sep 2021 • Tianxiao Shen, Regina Barzilay, Tommi S. Jaakkola
Existing methods for style transfer operate either with paired sentences or distributionally matched corpora which differ only in the desired style.
no code implementations • 29 Sep 2021 • Adam Fisch, Tal Schuster, Tommi S. Jaakkola, Regina Barzilay
In this paper, we develop a new approach to conformal prediction in which we aim to output a precise set of promising prediction candidates that is guaranteed to contain a limited number of incorrect answers.
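A hedged sketch of one way such a guarantee can be calibrated: choose the smallest score threshold whose average false-positive count stays within budget on a calibration split (the paper's exact risk definition may differ):

```python
import numpy as np

def calibrate_threshold(cal_scores, cal_correct, max_false_positives=1.0):
    """cal_scores: list of per-example candidate score arrays.
    cal_correct: matching boolean arrays (True = correct candidate).
    Returns the smallest threshold whose mean false-positive count
    stays within budget on the calibration split."""
    for lam in sorted(np.unique(np.concatenate(cal_scores))):
        fp = [np.sum((s >= lam) & ~c) for s, c in zip(cal_scores, cal_correct)]
        if np.mean(fp) <= max_false_positives:
            return lam
    return np.inf

def predict_set(scores, lam):
    """Return indices of candidates admitted to the prediction set."""
    return [i for i, s in enumerate(scores) if s >= lam]
```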
1 code implementation • NeurIPS 2021 • Octavian-Eugen Ganea, Lagnajit Pattanaik, Connor W. Coley, Regina Barzilay, Klavs F. Jensen, William H. Green, Tommi S. Jaakkola
Prediction of a molecule's 3D conformer ensemble from the molecular graph holds a key role in areas of cheminformatics and drug discovery.
1 code implementation • 4 Jun 2021 • Adam Yala, Homa Esfahanizadeh, Rafael G. L. D'Oliveira, Ken R. Duffy, Manya Ghobadi, Tommi S. Jaakkola, Vinod Vaikuntanathan, Regina Barzilay, Muriel Médard
We propose to approximate this family of encoding functions through random deep neural networks.
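A small numpy illustration of encoding with a fixed, randomly initialized deep network; the widths and depth are arbitrary demo choices:

```python
import numpy as np

# Sketch: weights are sampled once and never trained, so the network
# acts as a fixed random encoding function.
rng = np.random.default_rng(0)
widths = [128, 256, 256, 64]                      # input dim, then layer widths
weights = [rng.standard_normal((a, b)) / np.sqrt(a)
           for a, b in zip(widths[:-1], widths[1:])]

def random_encode(x):
    h = x
    for W in weights:
        h = np.maximum(h @ W, 0.0)                # random ReLU layer
    return h

encoded = random_encode(rng.standard_normal((4, 128)))   # -> shape (4, 64)
```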
no code implementations • 19 Dec 2020 • Han Zhao, Chen Dan, Bryon Aragam, Tommi S. Jaakkola, Geoffrey J. Gordon, Pradeep Ravikumar
A wide range of machine learning applications, such as privacy-preserving learning, algorithmic fairness, and domain adaptation/generalization, involve learning invariant representations of the data that aim to achieve two competing goals: (a) maximize information or accuracy with respect to a target response, and (b) maximize invariance or independence with respect to a set of protected features (e.g., for fairness or privacy).
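One standard way to make the competing goals concrete, which may differ in detail from the paper's analysis, is a min-max objective over a representation $f$, a target predictor $g$, and an adversary $d$ that tries to recover the protected attribute $A$, with $\lambda$ trading off the two terms:

$$\min_{f,\,g}\;\max_{d}\;\; \mathbb{E}\big[\ell\big(g(f(X)),\,Y\big)\big] \;-\; \lambda\,\mathbb{E}\big[\ell\big(d(f(X)),\,A\big)\big]$$

Minimizing the first term serves goal (a); driving up the best adversary's loss in the second term serves goal (b).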
1 code implementation • ICLR 2020 • Guang-He Lee, Tommi S. Jaakkola
We show how neural models can be used to realize piece-wise constant functions such as decision trees.
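A hedged numpy sketch of the idea: the ReLU activation pattern of a layer partitions the input space into regions, and making the output depend only on that pattern yields a piece-wise constant, tree-like function. The lookup-table assignment below is illustrative, not the paper's training procedure:

```python
import numpy as np

rng = np.random.default_rng(1)
W, b = rng.standard_normal((2, 3)), rng.standard_normal(3)   # 3 hyperplanes in 2-D

def region_code(x):
    """Binary activation pattern of the hidden layer = tree-path code."""
    return tuple((x @ W + b > 0).astype(int))

leaf_values = {}                                  # one constant output per region

def predict(x, default=0.0):
    return leaf_values.get(region_code(x), default)

leaf_values[region_code(np.array([0.5, -1.0]))] = 1.0   # assign a leaf value
print(predict(np.array([0.5, -1.0])))                    # -> 1.0
```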
1 code implementation • ICML 2020 • Shiyu Chang, Yang Zhang, Mo Yu, Tommi S. Jaakkola
Selective rationalization improves neural network interpretability by identifying a small subset of input features -- the rationale -- that best explains or supports the prediction.
no code implementations • 6 Nov 2019 • David Alvarez-Melis, Youssef Mroueh, Tommi S. Jaakkola
This paper focuses on the problem of unsupervised alignment of hierarchical data such as ontologies or lexical databases.
2 code implementations • IJCNLP 2019 • Mo Yu, Shiyu Chang, Yang Zhang, Tommi S. Jaakkola
Moreover, we explicitly control the rationale complement via an adversary so as not to leave any useful information out of the selection.
1 code implementation • NeurIPS 2019 • Shiyu Chang, Yang Zhang, Mo Yu, Tommi S. Jaakkola
Selection of input features such as relevant pieces of text has become a common technique of highlighting how complex neural predictors operate.
no code implementations • 21 Oct 2019 • Benson Chen, Tianxiao Shen, Tommi S. Jaakkola, Regina Barzilay
We propose a new model for making generalizable and diverse retrosynthetic reaction predictions.
1 code implementation • 30 Sep 2019 • Guang-He Lee, Tommi S. Jaakkola
We show how neural models can be used to realize piece-wise constant functions such as decision trees.
Ranked #1 on Drug Discovery on PDBbind
no code implementations • ICLR 2019 • Guang-He Lee, David Alvarez-Melis, Tommi S. Jaakkola
In this paper, we propose a new learning problem to encourage deep networks to have stable derivatives over larger regions.
1 code implementation • NeurIPS 2019 • Guang-He Lee, Yang Yuan, Shiyu Chang, Tommi S. Jaakkola
Specifically, an $\ell_2$-bounded adversary cannot alter the ensemble prediction generated under additive isotropic Gaussian noise, where the certified radius depends on both the variance of the noise distribution and the ensemble margin at the point of interest.
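For reference, a sketch of the widely used closed-form certificate for Gaussian smoothing (the paper's exact bound may differ); here `p_top` and `p_runner_up` play the role of the ensemble margin:

```python
from scipy.stats import norm

def certified_radius(p_top, p_runner_up, sigma):
    """Standard l2 radius for a classifier smoothed with isotropic
    Gaussian noise of std `sigma`: no perturbation within this radius
    can flip the smoothed majority vote."""
    return 0.5 * sigma * (norm.ppf(p_top) - norm.ppf(p_runner_up))

print(certified_radius(p_top=0.9, p_runner_up=0.05, sigma=0.25))  # ~0.37
```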
no code implementations • 26 Feb 2019 • Guang-He Lee, Wengong Jin, David Alvarez-Melis, Tommi S. Jaakkola
We provide a new approach to training neural models to exhibit transparency in a well-defined, functional manner.
no code implementations • 6 Feb 2019 • Hao Wang, Chengzhi Mao, Hao He, Ming-Min Zhao, Tommi S. Jaakkola, Dina Katabi
We consider the problem of inferring the values of an arbitrary set of variables (e.g., risk of diseases) given other observed variables (e.g., symptoms and diagnosed diseases) and high-dimensional signals (e.g., MRI images or EEG).
no code implementations • Chemical Science 2018 • Connor W. Coley, Wengong Jin, Luke Rogers, Timothy F. Jamison, Tommi S. Jaakkola, William H. Green, Regina Barzilay, Klavs F. Jensen
We present a supervised learning approach to predict the products of organic reactions given their reactants, reagents, and solvent(s).
no code implementations • EMNLP 2018 • David Alvarez-Melis, Tommi S. Jaakkola
Cross-lingual or cross-domain correspondences play key roles in tasks ranging from machine translation to transfer learning.
no code implementations • 30 Jun 2018 • Guang-He Lee, David Alvarez-Melis, Tommi S. Jaakkola
In contrast, we focus on temporal modeling and the problem of tailoring the predictor, functionally, towards an interpretable family.
no code implementations • 25 Jun 2018 • David Alvarez-Melis, Stefanie Jegelka, Tommi S. Jaakkola
Many problems in machine learning involve calculating correspondences between sets of objects, such as point clouds or images.
2 code implementations • 21 Jun 2018 • David Alvarez-Melis, Tommi S. Jaakkola
We argue that robustness of explanations (i.e., that similar inputs should give rise to similar explanations) is a key desideratum for interpretability.
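This desideratum can be probed with a local Lipschitz estimate of the explanation map; in this sketch `explain` is a hypothetical stand-in for any attribution method:

```python
import numpy as np

def explanation_lipschitz(explain, x, eps=0.1, n_samples=100, seed=0):
    """Estimate a local Lipschitz constant of `explain` around `x`
    by sampling nearby inputs and comparing explanations."""
    rng = np.random.default_rng(seed)
    e_x, worst = explain(x), 0.0
    for _ in range(n_samples):
        x_p = x + rng.uniform(-eps, eps, size=x.shape)
        ratio = np.linalg.norm(explain(x_p) - e_x) / np.linalg.norm(x_p - x)
        worst = max(worst, ratio)
    return worst   # large value = unstable (non-robust) explanations
```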
no code implementations • NeurIPS 2018 • David Alvarez-Melis, Tommi S. Jaakkola
Most recent work on interpretability of complex machine learning models has focused on estimating a posteriori explanations for previously trained models around specific predictions.
no code implementations • 17 Dec 2017 • David Alvarez-Melis, Tommi S. Jaakkola, Stefanie Jegelka
Optimal Transport has recently gained interest in machine learning for applications ranging from domain adaptation and sentence similarity to deep learning.
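For context, a minimal entropic-regularized OT solver (Sinkhorn iterations); this is the generic formulation, not the structured variant the paper studies:

```python
import numpy as np

def sinkhorn(a, b, cost, reg=0.1, n_iter=200):
    """Entropic OT between histograms a and b under a cost matrix."""
    K = np.exp(-cost / reg)
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]        # transport plan

a = b = np.ones(3) / 3
cost = np.abs(np.arange(3)[:, None] - np.arange(3)[None, :]).astype(float)
print(sinkhorn(a, b, cost).round(3))
```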
no code implementations • ICML 2017 • Ming-Min Zhao, Shichao Yue, Dina Katabi, Tommi S. Jaakkola, Matt T. Bianchi
We focus on predicting sleep stages from radio measurements without any sensors attached to the subjects.
no code implementations • EMNLP 2017 • David Alvarez-Melis, Tommi S. Jaakkola
We interpret the predictions of any black-box structured input-structured output model around a specific input-output pair.
no code implementations • TACL 2016 • Tatsunori B. Hashimoto, David Alvarez-Melis, Tommi S. Jaakkola
Continuous word representations have been remarkably useful across NLP tasks but remain poorly understood.
no code implementations • NeurIPS 2015 • Tatsunori B. Hashimoto, Yi Sun, Tommi S. Jaakkola
Using these techniques we generalize results on the degeneracy of hitting times and analyze a metric based on the Laplace transformed hitting time (LTHT).
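Assuming it matches the paper's definition, a Laplace transformed hitting time takes the standard form

$$d_\lambda(u, v) \;=\; -\tfrac{1}{\lambda}\,\log\,\mathbb{E}\!\left[e^{-\lambda\, T_{uv}}\right],$$

where $T_{uv}$ is the hitting time of a random walk from $u$ to $v$; as $\lambda \to 0$ this recovers the expected hitting time, while larger $\lambda$ damps long, noisy paths.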
no code implementations • 18 Sep 2015 • Tatsunori B. Hashimoto, David Alvarez-Melis, Tommi S. Jaakkola
Continuous vector representations of words and objects appear to carry surprisingly rich semantic content.
no code implementations • 20 Nov 2014 • Tatsunori B. Hashimoto, Yi Sun, Tommi S. Jaakkola
We demonstrate empirically that the estimator performs well on simulated examples as well as on real-world co-purchasing graphs even with a small number of points and degree scaling as low as $\log(n)$.
no code implementations • 26 Sep 2013 • Jean Honorio, Tommi S. Jaakkola
Furthermore, instead of obtaining a single solution for a specific regularization parameter, our algorithm finds the whole solution path.
no code implementations • NeurIPS 2012 • Ofer Meshi, Amir Globerson, Tommi S. Jaakkola
We also provide a simple dual to primal mapping that yields feasible primal solutions with a guaranteed rate of convergence.
no code implementations • NeurIPS 2010 • David Sontag, Ofer Meshi, Amir Globerson, Tommi S. Jaakkola
The problem of learning to predict structured labels is of key importance in many applications.
no code implementations • NeurIPS 2008 • David Sontag, Amir Globerson, Tommi S. Jaakkola
We propose a new class of consistency constraints for Linear Programming (LP) relaxations for finding the most probable (MAP) configuration in graphical models.
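For background, the basic pairwise LP relaxation that such constraints tighten optimizes over locally consistent pseudo-marginals $\mu$ (a standard formulation; the paper's contribution is to add higher-order consistency constraints on top of it):

$$\max_{\mu \in \mathcal{L}} \;\; \sum_{i} \sum_{x_i} \theta_i(x_i)\,\mu_i(x_i) \;+\; \sum_{(i,j) \in E} \sum_{x_i, x_j} \theta_{ij}(x_i, x_j)\,\mu_{ij}(x_i, x_j),$$

where the local polytope $\mathcal{L}$ enforces normalization and marginalization, e.g. $\sum_{x_j} \mu_{ij}(x_i, x_j) = \mu_i(x_i)$.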
no code implementations • NeurIPS 2007 • Amir Globerson, Tommi S. Jaakkola
We present a novel message passing algorithm for approximating the MAP problem in graphical models.