Search Results for author: David Alvarez-Melis

Found 32 papers, 11 papers with code

Continuous Language Model Interpolation for Dynamic and Controllable Text Generation

2 code implementations • 10 Apr 2024 • Sara Kangaslahti, David Alvarez-Melis

We empirically show that varying the interpolation weights yields predictable and consistent change in the model outputs with respect to all of the controlled attributes.

Language Modelling · Text Generation
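The core operation behind this kind of controllable generation is easy to illustrate: a model whose weights are a convex combination of two fine-tuned endpoints. Below is a minimal numpy sketch; the function name `interpolate_weights` and the toy parameter dicts are illustrative, not the paper's code.

```python
import numpy as np

def interpolate_weights(theta_a, theta_b, alpha):
    """Linearly interpolate two models' parameter dicts:
    theta = (1 - alpha) * theta_a + alpha * theta_b."""
    return {k: (1 - alpha) * theta_a[k] + alpha * theta_b[k]
            for k in theta_a}

# Two toy "models" with matching parameter shapes.
theta_a = {"w": np.array([1.0, 0.0]), "b": np.array([0.0])}
theta_b = {"w": np.array([0.0, 1.0]), "b": np.array([2.0])}

mid = interpolate_weights(theta_a, theta_b, 0.5)
print(mid["w"], mid["b"])  # [0.5 0.5] [1.]
```

Sweeping `alpha` through [0, 1] traces a continuous path between the two endpoint models, which is what makes attribute control dynamic rather than a discrete model switch.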

Distributional Dataset Distillation with Subtask Decomposition

1 code implementation • 1 Mar 2024 • Tian Qin, Zhiwei Deng, David Alvarez-Melis

What does a neural network learn when training from a task-specific dataset?


Tag-LLM: Repurposing General-Purpose LLMs for Specialized Domains

1 code implementation • 6 Feb 2024 • Junhong Shen, Neil Tenenholtz, James Brian Hall, David Alvarez-Melis, Nicolo Fusi

Large Language Models (LLMs) have demonstrated remarkable proficiency in understanding and generating natural language.

TAG · Zero-shot Generalization

Generating Synthetic Datasets by Interpolating along Generalized Geodesics

no code implementations • 12 Jun 2023 • Jiaojiao Fan, David Alvarez-Melis

We compute these geodesics using a recent notion of distance between labeled datasets, and derive alternative interpolation schemes based on it: using either barycentric projections or optimal transport maps, the latter computed using recent neural OT methods.

Transfer Learning
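As a rough illustration of one ingredient mentioned above, a barycentric projection maps each source sample to the coupling-weighted average of target samples. The sketch below uses a toy entropic (Sinkhorn) solver on small point clouds; `sinkhorn` and `barycentric_projection` are illustrative helpers under assumed toy data, not the paper's neural OT implementation.

```python
import numpy as np

def sinkhorn(C, a, b, eps=0.05, iters=500):
    """Entropic OT: coupling between histograms a and b for cost matrix C."""
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]

def barycentric_projection(P, Y):
    """Map each source point to the coupling-weighted average of targets."""
    return (P @ Y) / P.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, (5, 2))    # source point cloud
Y = rng.normal(3.0, 1.0, (6, 2))    # target point cloud
C = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
C = C / C.max()                      # normalize costs for stable Sinkhorn
P = sinkhorn(C, np.full(5, 1 / 5), np.full(6, 1 / 6))
X_mapped = barycentric_projection(P, Y)
print(X_mapped.shape)                # (5, 2): one mapped point per source sample
```

Each mapped point is a convex combination of target samples, so interpolating between `X` and `X_mapped` moves the source cloud along a plausible path toward the target distribution.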

Transfer RL via the Undo Maps Formalism

no code implementations • 26 Nov 2022 • Abhi Gupta, Ted Moskovitz, David Alvarez-Melis, Aldo Pacchiano

Transferring knowledge across domains is one of the most fundamental problems in machine learning, but doing so effectively in the context of reinforcement learning remains largely an open problem.

Imitation Learning · Transfer Learning

Budget-Constrained Bounds for Mini-Batch Estimation of Optimal Transport

no code implementations • 24 Oct 2022 • David Alvarez-Melis, Nicolò Fusi, Lester Mackey, Tal Wagner

Optimal Transport (OT) is a fundamental tool for comparing probability distributions, but its exact computation remains prohibitive for large datasets.
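A common workaround, which this paper analyzes, is to average exact OT costs over small random mini-batches. A minimal sketch in one dimension, where exact 1-Wasserstein reduces to sorting (the helper names and toy Gaussians are illustrative):

```python
import numpy as np

def w1_1d(x, y):
    """Exact 1-Wasserstein distance between equal-size 1D samples:
    sort both and average the absolute differences."""
    return np.mean(np.abs(np.sort(x) - np.sort(y)))

def minibatch_w1(x, y, batch=64, n_batches=50, seed=0):
    """Average exact OT over random mini-batches; an upward-biased but
    cheap estimator of the full-sample distance."""
    rng = np.random.default_rng(seed)
    est = [w1_1d(rng.choice(x, batch, replace=False),
                 rng.choice(y, batch, replace=False))
           for _ in range(n_batches)]
    return float(np.mean(est))

x = np.random.default_rng(1).normal(0.0, 1.0, 2000)
y = np.random.default_rng(2).normal(1.0, 1.0, 2000)
est = minibatch_w1(x, y)
print(round(est, 2))  # close to the true mean shift of 1.0
```

The batch size trades compute for bias, which is exactly the budget-constrained regime the paper's bounds address.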

InfoOT: Information Maximizing Optimal Transport

1 code implementation • 6 Oct 2022 • Ching-Yao Chuang, Stefanie Jegelka, David Alvarez-Melis

Optimal transport aligns samples across distributions by minimizing the transportation cost between them, e.g., geometric distances.

Domain Adaptation · Retrieval

Neural Unbalanced Optimal Transport via Cycle-Consistent Semi-Couplings

no code implementations • 30 Sep 2022 • Frederike Lübeck, Charlotte Bunne, Gabriele Gut, Jacobo Sarabia del Castillo, Lucas Pelkmans, David Alvarez-Melis

However, the usual formulation of OT assumes conservation of mass, which is violated in unbalanced scenarios in which the population size changes (e.g., cell proliferation or death) between measurements.

Interpretable Distribution Shift Detection using Optimal Transport

no code implementations • 4 Aug 2022 • Neha Hulkund, Nicolo Fusi, Jennifer Wortman Vaughan, David Alvarez-Melis

We propose a method to identify and characterize distribution shifts in classification datasets based on optimal transport.

Why GANs are overkill for NLP

no code implementations • 19 May 2022 • David Alvarez-Melis, Vikas Garg, Adam Tauman Kalai

We show that, while it may seem that maximizing likelihood is inherently different than minimizing distinguishability, this distinction is largely artificial and only holds for limited models.

Text Generation

Hierarchical Optimal Transport for Comparing Histopathology Datasets

no code implementations • 18 Apr 2022 • Anna Yeaton, Rahul G. Krishnan, Rebecca Mieloszyk, David Alvarez-Melis, Grace Huynh

Scarcity of labeled histopathology data limits the applicability of deep learning methods to under-profiled cancer types and labels.

Transfer Learning · Type prediction

Optimizing Functionals on the Space of Probabilities with Input Convex Neural Networks

no code implementations • 1 Jun 2021 • David Alvarez-Melis, Yair Schiff, Youssef Mroueh

Gradient flows are a powerful tool for optimizing functionals in general metric spaces, including the space of probabilities endowed with the Wasserstein metric.
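In the particle discretization, a Wasserstein gradient flow of a potential energy F(mu) = E_mu[V(x)] moves every particle along the velocity field -grad V. A toy sketch of that idea under an assumed quadratic potential (not the paper's input-convex neural network parameterization):

```python
import numpy as np

# Particle discretization of a Wasserstein gradient flow for the potential
# energy F(mu) = E_mu[ V(x) ] with V(x) = ||x - c||^2 / 2: each particle
# follows the velocity field -grad V(x) = c - x.
c = np.array([2.0, -1.0])                                # minimizer of V
x = np.random.default_rng(0).normal(0.0, 1.0, (200, 2))  # initial particles

step = 0.1
for _ in range(100):
    x = x - step * (x - c)    # forward Euler step along -grad V

print(np.round(x.mean(axis=0), 2))  # the particle cloud collapses toward c
```

Replacing the closed-form gradient with a learned convex potential is what lets the same scheme handle functionals with no analytic gradient.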

Dataset Dynamics via Gradient Flows in Probability Space

1 code implementation • 24 Oct 2020 • David Alvarez-Melis, Nicolò Fusi

Various machine learning tasks, from generative modeling to domain adaptation, revolve around the concept of dataset transformation and manipulation.

BIG-bench Machine Learning · Domain Adaptation +1

Geometric Dataset Distances via Optimal Transport

1 code implementation • NeurIPS 2020 • David Alvarez-Melis, Nicolò Fusi

The notion of task similarity is at the core of various machine learning paradigms, such as domain adaptation and meta-learning.

Domain Adaptation · Meta-Learning +1

Unsupervised Hierarchy Matching with Optimal Transport over Hyperbolic Spaces

no code implementations • 6 Nov 2019 • David Alvarez-Melis, Youssef Mroueh, Tommi S. Jaakkola

This paper focuses on the problem of unsupervised alignment of hierarchical data such as ontologies or lexical databases.

Ontology Matching · Word Alignment

Probabilistic Bias Mitigation in Word Embeddings

no code implementations • 31 Oct 2019 • Hailey James, David Alvarez-Melis

In this work we propose a probabilistic view of word embedding bias.

Word Embeddings

Weight of Evidence as a Basis for Human-Oriented Explanations

1 code implementation • 29 Oct 2019 • David Alvarez-Melis, Hal Daumé III, Jennifer Wortman Vaughan, Hanna Wallach

Interpretability is an elusive but highly sought-after characteristic of modern machine learning methods.


Towards Robust, Locally Linear Deep Networks

no code implementations • ICLR 2019 • Guang-He Lee, David Alvarez-Melis, Tommi S. Jaakkola

In this paper, we propose a new learning problem to encourage deep networks to have stable derivatives over larger regions.

Learning Generative Models across Incomparable Spaces

no code implementations • 14 May 2019 • Charlotte Bunne, David Alvarez-Melis, Andreas Krause, Stefanie Jegelka

Generative Adversarial Networks have shown remarkable success in learning a distribution that faithfully recovers a reference distribution in its entirety.

Relational Reasoning

Gromov-Wasserstein Alignment of Word Embedding Spaces

no code implementations • EMNLP 2018 • David Alvarez-Melis, Tommi S. Jaakkola

Cross-lingual or cross-domain correspondences play key roles in tasks ranging from machine translation to transfer learning.

Machine Translation · Transfer Learning +3

Game-Theoretic Interpretability for Temporal Modeling

no code implementations • 30 Jun 2018 • Guang-He Lee, David Alvarez-Melis, Tommi S. Jaakkola

In contrast, we focus on temporal modeling and the problem of tailoring the predictor, functionally, towards an interpretable family.

Towards Optimal Transport with Global Invariances

no code implementations • 25 Jun 2018 • David Alvarez-Melis, Stefanie Jegelka, Tommi S. Jaakkola

Many problems in machine learning involve calculating correspondences between sets of objects, such as point clouds or images.

Translation · Word Embeddings +1

On the Robustness of Interpretability Methods

2 code implementations • 21 Jun 2018 • David Alvarez-Melis, Tommi S. Jaakkola

We argue that robustness of explanations, i.e., that similar inputs should give rise to similar explanations, is a key desideratum for interpretability.
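This desideratum can be probed with an empirical local Lipschitz estimate of the explanation map: how much can the explanation change relative to a small input perturbation? A toy sketch for a gradient explanation of an assumed model f(x) = sin(w · x); the helper names are illustrative, not the paper's code.

```python
import numpy as np

def explanation(x, w):
    """Gradient 'saliency' explanation of the toy model f(x) = sin(w . x)."""
    return np.cos(w @ x) * w

def local_instability(x, w, radius=0.1, n=200, seed=0):
    """Empirical local Lipschitz estimate of the explanation map:
    max over sampled neighbors x' of ||e(x') - e(x)|| / ||x' - x||."""
    rng = np.random.default_rng(seed)
    e0 = explanation(x, w)
    worst = 0.0
    for _ in range(n):
        d = rng.normal(0.0, 1.0, x.shape)
        d *= radius / np.linalg.norm(d)   # sample on a sphere of given radius
        ratio = np.linalg.norm(explanation(x + d, w) - e0) / radius
        worst = max(worst, ratio)
    return worst

w = np.array([1.0, 2.0])
score = local_instability(np.array([0.3, -0.2]), w)
print(score)  # a finite, non-negative instability score
```

Large values of this score flag inputs where nearly identical points receive very different explanations, i.e., failures of the robustness desideratum.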

Towards Robust Interpretability with Self-Explaining Neural Networks

no code implementations • NeurIPS 2018 • David Alvarez-Melis, Tommi S. Jaakkola

Most recent work on interpretability of complex machine learning models has focused on estimating a posteriori explanations for previously trained models around specific predictions.

Structured Optimal Transport

no code implementations • 17 Dec 2017 • David Alvarez-Melis, Tommi S. Jaakkola, Stefanie Jegelka

Optimal Transport has recently gained interest in machine learning for applications ranging from domain adaptation, sentence similarities to deep learning.

BIG-bench Machine Learning · Domain Adaptation +1

A causal framework for explaining the predictions of black-box sequence-to-sequence models

no code implementations • EMNLP 2017 • David Alvarez-Melis, Tommi S. Jaakkola

We interpret the predictions of any black-box structured input-structured output model around a specific input-output pair.

Distributional Adversarial Networks

1 code implementation • ICLR 2018 • Chengtao Li, David Alvarez-Melis, Keyulu Xu, Stefanie Jegelka, Suvrit Sra

We propose a framework for adversarial training that relies on a sample rather than a single sample point as the fundamental unit of discrimination.

Domain Adaptation

Word, graph and manifold embedding from Markov processes

no code implementations • 18 Sep 2015 • Tatsunori B. Hashimoto, David Alvarez-Melis, Tommi S. Jaakkola

Continuous vector representations of words and objects appear to carry surprisingly rich semantic content.

Dimensionality Reduction · Word Embeddings
