Search Results for author: Marton Havasi

Found 19 papers, 8 papers with code

Diverse Concept Proposals for Concept Bottleneck Models

no code implementations • 24 Dec 2024 • Katrina Brown, Marton Havasi, Finale Doshi-Velez

On EHR data, our model was able to identify 4 out of the 5 pre-defined concepts without supervision.

Flow Matching Guide and Code

1 code implementation • 9 Dec 2024 • Yaron Lipman, Marton Havasi, Peter Holderrieth, Neta Shaul, Matt Le, Brian Karrer, Ricky T. Q. Chen, David Lopez-Paz, Heli Ben-Hamu, Itai Gat

Flow Matching (FM) is a recent framework for generative modeling that has achieved state-of-the-art performance across various domains, including image, video, audio, speech, and biological structures.

Text Generation
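
As a rough illustration of the framework (a minimal sketch, not the code released with the guide; the toy 2-D data and two-layer network are assumptions), conditional flow matching with a linear interpolation path reduces to a simple regression on velocities:

```python
import torch
import torch.nn as nn

# Toy velocity network: takes (x_t, t) and predicts a velocity vector.
model = nn.Sequential(nn.Linear(3, 64), nn.SiLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(1000):
    x1 = torch.randn(256, 2) * 0.5 + 2.0    # stand-in for data samples
    x0 = torch.randn(256, 2)                # source (noise) samples
    t = torch.rand(256, 1)                  # time drawn uniformly in [0, 1]

    x_t = (1 - t) * x0 + t * x1             # linear interpolation path
    target_v = x1 - x0                      # velocity of that path

    pred_v = model(torch.cat([x_t, t], dim=1))
    loss = ((pred_v - target_v) ** 2).mean()  # flow-matching regression loss

    opt.zero_grad()
    loss.backward()
    opt.step()
```

Sampling then integrates dx/dt = v(x, t) from t = 0 to t = 1, for example with a simple Euler loop starting from noise.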

Flow Matching with General Discrete Paths: A Kinetic-Optimal Perspective

no code implementations • 4 Dec 2024 • Neta Shaul, Itai Gat, Marton Havasi, Daniel Severo, Anuroop Sriram, Peter Holderrieth, Brian Karrer, Yaron Lipman, Ricky T. Q. Chen

Through the lens of optimizing the symmetric kinetic energy, we propose velocity formulas that can be applied to any given probability path. This completely decouples the probability path from the velocity and gives the user the freedom to specify any desirable probability path based on expert knowledge specific to the data domain.

Image Generation • Text Generation
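
As background for the decoupling claim (these are standard continuous-state flow-matching facts, not the paper's discrete kinetic-optimal derivation): a velocity field generates a probability path whenever the continuity equation holds, so many velocities are compatible with a single path, and one can be singled out by an auxiliary criterion such as kinetic energy:

```latex
\partial_t p_t(x) + \nabla \cdot \big(p_t(x)\, u_t(x)\big) = 0,
\qquad
u_t^{\star} = \arg\min_{u_t \text{ generating } p_t} \int \|u_t(x)\|^2 \, p_t(x)\, dx .
```

The paper develops the analogous idea for discrete paths via the symmetric kinetic energy, which is what lets the probability path be chosen freely and the velocity be derived from it.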

Boosting Latent Diffusion with Perceptual Objectives

no code implementations • 6 Nov 2024 • Tariq Berrada, Pietro Astolfi, Melissa Hall, Marton Havasi, Yohann Benchetrit, Adriana Romero-Soriano, Karteek Alahari, Michal Drozdzal, Jakob Verbeek

Latent diffusion models (LDMs) learn the data distribution in the latent space of an autoencoder (AE) and produce images by mapping the generated latents into RGB image space using the AE decoder.

Decoder
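
To make that sentence concrete, here is a schematic of the generation pipeline it describes (names, shapes, and the placeholder denoiser are assumptions for the sketch, not the paper's code):

```python
import torch
import torch.nn as nn

class TinyDecoder(nn.Module):
    """Stand-in for the AE decoder mapping latents back to RGB images."""
    def __init__(self, latent_dim=4, image_channels=3):
        super().__init__()
        self.net = nn.ConvTranspose2d(latent_dim, image_channels,
                                      kernel_size=8, stride=8)

    def forward(self, z):
        return self.net(z)

decoder = TinyDecoder()
denoiser = lambda z, t: torch.zeros_like(z)  # placeholder for the latent diffusion model

# Generation: iteratively denoise in latent space, then decode once at the end.
z = torch.randn(1, 4, 8, 8)            # start from Gaussian latent noise
for t in reversed(range(10)):          # schematic reverse-diffusion loop
    z = z - 0.1 * denoiser(z, t)       # placeholder update rule
image = decoder(z)                     # AE decoder maps latents to RGB space
print(image.shape)                     # torch.Size([1, 3, 64, 64])
```

The sketch only shows where the autoencoder decoder sits in the generation pipeline.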

Exact Byte-Level Probabilities from Tokenized Language Models for FIM-Tasks and Model Ensembles

no code implementations • 11 Oct 2024 • Buu Phan, Brandon Amos, Itai Gat, Marton Havasi, Matthew Muckley, Karen Ullrich

In FIM tasks where input prompts may terminate mid-token, leading to out-of-distribution tokenization, our method mitigates performance degradation and achieves an approximately 18% improvement in FIM coding benchmarks, consistently outperforming the standard token healing fix.

LEMMA
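
A toy illustration of the issue and of the byte-level view (the vocabulary, probabilities, and helper below are hypothetical and much simpler than the paper's method):

```python
# Hypothetical next-token probabilities from a tokenized language model.
next_token_probs = {
    " print": 0.50,
    " pri": 0.10,
    " private": 0.25,
    " p": 0.15,
}

def next_byte_distribution(prompt_tail, token_probs):
    """Aggregate token probabilities into a next-byte distribution, given that
    the prompt may have ended mid-token (e.g. with the tail ' pri')."""
    byte_probs = {}
    for token, p in token_probs.items():
        if token.startswith(prompt_tail) and len(token) > len(prompt_tail):
            next_byte = token[len(prompt_tail)]
            byte_probs[next_byte] = byte_probs.get(next_byte, 0.0) + p
    total = sum(byte_probs.values()) or 1.0
    return {b: p / total for b, p in byte_probs.items()}

# A prompt ending mid-token (" pri") is out-of-distribution for token-level
# decoding, but the byte-level view still gives a sensible next-byte answer.
print(next_byte_distribution(" pri", next_token_probs))
# -> roughly {'n': 0.67, 'v': 0.33}; only tokens consistent with ' pri' contribute
```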

Guarantee Regions for Local Explanations

1 code implementation • 20 Feb 2024 • Marton Havasi, Sonali Parbhoo, Finale Doshi-Velez

Interpretability methods that utilise local surrogate models (e.g. LIME) are very good at describing the behaviour of the predictive model at a point of interest, but they are not guaranteed to extrapolate to the local region surrounding the point.
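
For context on what a local surrogate explanation looks like (a generic LIME-style sketch on an assumed toy model, not the paper's guarantee-region method):

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

def black_box(X):
    """Black-box predictive model to be explained (an assumption for the sketch)."""
    return np.sin(3 * X[:, 0]) + X[:, 1] ** 2

x0 = np.array([0.5, -0.2])                         # point of interest

# Sample perturbations around x0 and weight them by proximity (RBF kernel).
X_pert = x0 + 0.1 * rng.standard_normal((500, 2))
weights = np.exp(-np.sum((X_pert - x0) ** 2, axis=1) / 0.05)

# Fit a weighted linear surrogate: it describes the model well near x0, but
# nothing guarantees that it extrapolates to the region surrounding the point.
surrogate = Ridge(alpha=1e-3).fit(X_pert, black_box(X_pert), sample_weight=weights)
print("local coefficients:", surrogate.coef_)
```

The paper addresses exactly that gap (guarantee regions); the sketch only reproduces the standard local-fit step.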

What Makes a Good Explanation?: A Harmonized View of Properties of Explanations

no code implementations • 10 Nov 2022 • Zixi Chen, Varshini Subhash, Marton Havasi, Weiwei Pan, Finale Doshi-Velez

In this work, we survey properties defined in interpretable machine learning papers, synthesize them based on what they actually measure, and describe the trade-offs between different formulations of these properties.

Interpretable Machine Learning

Training independent subnetworks for robust prediction

2 code implementations • ICLR 2021 • Marton Havasi, Rodolphe Jenatton, Stanislav Fort, Jeremiah Zhe Liu, Jasper Snoek, Balaji Lakshminarayanan, Andrew M. Dai, Dustin Tran

Recent approaches to efficiently ensemble neural networks have shown that strong robustness and uncertainty performance can be achieved with a negligible gain in parameters over the original network.

Prediction
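
As a rough sketch of the multi-input multi-output idea behind the paper (the toy MLP, shapes, and training details are assumptions; the released code implements the actual method):

```python
import torch
import torch.nn as nn

M = 3  # number of subnetworks ensembled inside one network

class MIMONet(nn.Module):
    """One shared network takes M concatenated inputs and emits M predictions."""
    def __init__(self, in_dim=10, num_classes=5, hidden=128):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(M * in_dim, hidden), nn.ReLU())
        self.heads = nn.Linear(hidden, M * num_classes)

    def forward(self, x):                    # x: (batch, M, in_dim)
        h = self.body(x.flatten(1))
        return self.heads(h).view(x.size(0), M, -1)

net = MIMONet()

# Training pairs each head with an independent example, so the subnetworks learn
# independently; at test time the same input is repeated M times and the M
# outputs are averaged, giving an ensemble with only a small parameter overhead.
x_test = torch.randn(4, 10)
logits = net(x_test.unsqueeze(1).expand(-1, M, -1))
ensemble_pred = logits.softmax(-1).mean(dim=1)
print(ensemble_pred.shape)                   # torch.Size([4, 5])
```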

Compression without Quantization

no code implementations • 25 Sep 2019 • Gergely Flamich, Marton Havasi, José Miguel Hernández-Lobato

Standard compression algorithms work by mapping an image to a discrete code using an encoder, from which the original image can be reconstructed through a decoder.

Decoder • Image Compression +1
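
To illustrate the standard pipeline the sentence refers to (and that the paper's title contrasts with), here is a schematic quantize-then-decode sketch; the toy encoder and decoder are assumptions:

```python
import torch
import torch.nn as nn

encoder = nn.Conv2d(3, 8, kernel_size=4, stride=4)             # image  -> latent
decoder = nn.ConvTranspose2d(8, 3, kernel_size=4, stride=4)    # latent -> image

image = torch.rand(1, 3, 32, 32)

# Standard learned compression: encode, round to a discrete code (the
# quantization step), then reconstruct the image through the decoder.
latent = encoder(image)
discrete_code = torch.round(latent)      # discrete code to be entropy-coded
reconstruction = decoder(discrete_code)
print(reconstruction.shape)              # torch.Size([1, 3, 32, 32])
```

The title's "without quantization" refers to avoiding a step like the rounding above; the sketch only shows the standard baseline.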

Refining the variational posterior through iterative optimization

no code implementations • 25 Sep 2019 • Marton Havasi, Jasper Snoek, Dustin Tran, Jonathan Gordon, José Miguel Hernández-Lobato

Variational inference (VI) is a popular approach for approximate Bayesian inference that is particularly promising for highly parameterized models such as deep neural networks.

Bayesian Inference • Variational Inference
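
For reference, the objective that VI optimizes, and that iterative refinement of the posterior tightens, is the evidence lower bound (a standard identity, not something specific to this paper):

```latex
\log p(x) \;\ge\; \mathcal{L}(q)
\;=\; \mathbb{E}_{q(z)}\big[\log p(x \mid z)\big]
\;-\; \mathrm{KL}\big(q(z)\,\|\,p(z)\big),
```

with equality exactly when the variational posterior $q(z)$ matches the true posterior $p(z \mid x)$.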

Minimal Random Code Learning: Getting Bits Back from Compressed Model Parameters

2 code implementations • ICLR 2019 • Marton Havasi, Robert Peharz, José Miguel Hernández-Lobato

While deep neural networks are a highly successful model class, their large memory footprint puts considerable strain on energy consumption, communication bandwidth, and storage requirements.

Neural Network Compression • Quantization

Deep Gaussian Processes with Decoupled Inducing Inputs

no code implementations • 9 Jan 2018 • Marton Havasi, José Miguel Hernández-Lobato, Juan José Murillo-Fuentes

Deep Gaussian Processes (DGP) are hierarchical generalizations of Gaussian Processes (GP) that have proven to work effectively on multiple supervised regression tasks.

Gaussian Processes
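
To unpack "hierarchical generalization": a depth-$L$ DGP composes layers whose mappings are themselves GP-distributed (standard DGP notation, not the paper's specific decoupled-inducing-input construction):

```latex
f^{(\ell)} \sim \mathcal{GP}\big(0,\; k_{\ell}(\cdot,\cdot)\big), \quad \ell = 1,\dots,L,
\qquad
y = f^{(L)}\!\big(f^{(L-1)}(\cdots f^{(1)}(x))\big) + \varepsilon .
```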
