Search Results for author: Tamir Hazan

Found 40 papers, 14 papers with code

Learning Generalized Gumbel-max Causal Mechanisms

1 code implementation NeurIPS 2021 Guy Lorberbom, Daniel D. Johnson, Chris J. Maddison, Daniel Tarlow, Tamir Hazan

To perform counterfactual reasoning in Structural Causal Models (SCMs), one needs to know the causal mechanisms, which provide factorizations of conditional distributions into noise sources and deterministic functions mapping realizations of noise to samples.

Counterfactual Reasoning
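A concrete instance of such a mechanism is the classical Gumbel-max parameterization of a categorical variable: the sample is the argmax of log-probabilities plus Gumbel noise, and reusing the same noise realization under a modified distribution gives a counterfactual sample. The NumPy sketch below illustrates that baseline mechanism only; it is not the learned, generalized mechanism proposed in the paper.

```python
import numpy as np

def sample_gumbel(rng, k):
    """Draw k i.i.d. standard Gumbel noise variables."""
    return -np.log(-np.log(rng.uniform(size=k)))

def gumbel_max_mechanism(log_probs, noise):
    """Deterministic mechanism: maps (log-probabilities, noise) to a category."""
    return int(np.argmax(log_probs + noise))

rng = np.random.default_rng(0)
p_factual = np.log([0.2, 0.5, 0.3])          # observed regime
p_counter = np.log([0.6, 0.2, 0.2])          # intervened regime

noise = sample_gumbel(rng, 3)                # shared exogenous noise source
y_factual = gumbel_max_mechanism(p_factual, noise)
y_counter = gumbel_max_mechanism(p_counter, noise)   # same noise, new mechanism input
print(y_factual, y_counter)
```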

Visual Navigation with Spatial Attention

1 code implementation CVPR 2021 Bar Mayo, Tamir Hazan, Ayellet Tal

This combination of the "what" and the "where" allows the agent to navigate toward the sought-after object effectively.

Navigate Object +3

Factor Graph Attention

1 code implementation CVPR 2019 Idan Schwartz, Seunghak Yu, Tamir Hazan, Alexander Schwing

We address this issue and develop a general attention mechanism for visual dialog which operates on any number of data utilities.

Graph Attention Question Answering +2

Direct Optimization through $\arg \max$ for Discrete Variational Auto-Encoder

2 code implementations ICLR 2019 Guy Lorberbom, Andreea Gane, Tommi Jaakkola, Tamir Hazan

We demonstrate empirically the effectiveness of the direct loss minimization technique in variational autoencoders with both unstructured and structured discrete latent variables.

Direct Optimization through $\arg \max$ for Discrete Variational Auto-Encoder

1 code implementation NeurIPS 2019 Guy Lorberbom, Tommi Jaakkola, Andreea Gane, Tamir Hazan

Reparameterization of variational auto-encoders with continuous random variables is an effective method for reducing the variance of their gradient estimates.
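For context, the continuous reparameterization mentioned here expresses a Gaussian latent as a deterministic function of the encoder outputs and parameter-free noise, so gradients pass through the sampling step; the paper's contribution is to carry this idea over to discrete latents via the argmax. A minimal PyTorch sketch of the standard continuous trick only (not the paper's discrete estimator):

```python
import torch

def reparameterize(mu, log_var):
    """Gaussian reparameterization: z = mu + sigma * eps with eps ~ N(0, I).

    Only eps is sampled; mu and log_var enter deterministically, so the
    gradient estimate has low variance compared to score-function estimators.
    """
    eps = torch.randn_like(mu)
    return mu + torch.exp(0.5 * log_var) * eps

mu = torch.zeros(4, requires_grad=True)
log_var = torch.zeros(4, requires_grad=True)
z = reparameterize(mu, log_var)
z.sum().backward()   # gradients reach mu and log_var directly
```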

Video and Text Matching with Conditioned Embeddings

1 code implementation 21 Oct 2021 Ameen Ali, Idan Schwartz, Tamir Hazan, Lior Wolf

Traditionally, video and text matching is done by learning a shared embedding space, and the encoding of one modality is independent of the other.

Machine Translation Sentence +4

Learning Randomly Perturbed Structured Predictors for Direct Loss Minimization

1 code implementation 11 Jul 2020 Hedda Cohen Indelman, Tamir Hazan

In this work, we learn the variance of these randomized structured predictors and show that it balances better between the learned score function and the randomized noise in structured prediction.

Structured Prediction
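The randomized predictors in question take the form of an argmax over the learned score plus scaled noise, with the noise scale itself treated as a learnable quantity. The NumPy sketch below shows only that prediction step over an enumerable label set (the structured solver and the learning of the scale by direct loss minimization are omitted):

```python
import numpy as np

def randomized_predict(scores, sigma, rng):
    """Randomized predictor: argmax of the score plus sigma-scaled Gumbel noise.

    sigma balances the learned score function against the injected randomness;
    in the paper this scale is learned rather than fixed.
    """
    gumbel = -np.log(-np.log(rng.uniform(size=scores.shape)))
    return int(np.argmax(scores + sigma * gumbel))

rng = np.random.default_rng(0)
scores = np.array([1.0, 0.8, -0.5])
print([randomized_predict(scores, sigma=0.5, rng=rng) for _ in range(5)])
```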

A Functional Information Perspective on Model Interpretation

1 code implementation 12 Jun 2022 Itai Gat, Nitay Calderon, Roi Reichart, Tamir Hazan

This work suggests a theoretical framework for model interpretability by measuring the contribution of relevant features to the functional entropy of the network with respect to the input.
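For reference, the functional entropy of a non-negative function f under a probability measure mu over inputs is commonly defined as follows (the paper's exact normalization and choice of measure may differ):

\[
\operatorname{Ent}_{\mu}(f) \;=\; \mathbb{E}_{\mu}\left[ f \log f \right] \;-\; \mathbb{E}_{\mu}[f] \log \mathbb{E}_{\mu}[f], \qquad f \ge 0 .
\]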

Learning Discrete Structured Variational Auto-Encoder using Natural Evolution Strategies

1 code implementation ICLR 2022 Alon Berliner, Guy Rotman, Yossi Adi, Roi Reichart, Tamir Hazan

Discrete variational auto-encoders (VAEs) are able to represent semantic latent spaces in generative learning.

Dual Decomposition of Convex Optimization Layers for Consistent Attention in Medical Images

1 code implementation 6 Jun 2022 Tom Ron, Michal Weiler-Sagie, Tamir Hazan

Recently, attention mechanisms have shown compelling results both in their predictive performance and in their interpretable qualities.

High Dimensional Inference with Random Maximum A-Posteriori Perturbations

no code implementations 10 Feb 2016 Tamir Hazan, Francesco Orabona, Anand D. Sarwate, Subhransu Maji, Tommi Jaakkola

This paper shows that the expected value of perturb-max inference with low dimensional perturbations can be used sequentially to generate unbiased samples from the Gibbs distribution.

Vocal Bursts Intensity Prediction
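For context, the classical full-dimensional perturb-max identity states that adding i.i.d. Gumbel noise to every configuration's potential and taking the argmax yields an exact sample from the Gibbs distribution; the paper's point is to obtain unbiased samples using only low-dimensional perturbations, applied sequentially. The NumPy sketch below demonstrates the full-dimensional identity only:

```python
import numpy as np

def perturb_max_sample(theta, rng):
    """Exact Gibbs sampling by full perturb-max.

    theta holds one potential theta(x) per configuration x. Perturbing every
    potential with i.i.d. Gumbel noise and taking the argmax returns x with
    probability proportional to exp(theta(x)).
    """
    gumbel = -np.log(-np.log(rng.uniform(size=theta.shape)))
    return int(np.argmax(theta + gumbel))

rng = np.random.default_rng(0)
theta = np.array([0.5, 1.5, -0.2, 0.0])
samples = [perturb_max_sample(theta, rng) for _ in range(10_000)]
empirical = np.bincount(samples, minlength=theta.size) / len(samples)
gibbs = np.exp(theta) / np.exp(theta).sum()
print(empirical, gibbs)   # the two distributions should nearly match
```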

Tight Bounds for Bandit Combinatorial Optimization

no code implementations 24 Feb 2017 Alon Cohen, Tamir Hazan, Tomer Koren

We revisit the study of optimal regret rates in bandit combinatorial optimization, a fundamental framework for sequential decision making under uncertainty that abstracts numerous combinatorial prediction problems.

Combinatorial Optimization Decision Making +1

Co-segmentation for Space-Time Co-located Collections

no code implementations 31 Jan 2017 Hadar Averbuch-Elor, Johannes Kopf, Tamir Hazan, Daniel Cohen-Or

Thus, to disambiguate what the common foreground object is, we introduce a weakly-supervised technique, where we assume only a small seed, given in the form of a single segmented image.

Object Segmentation

Online Learning with Feedback Graphs Without the Graphs

no code implementations 23 May 2016 Alon Cohen, Tamir Hazan, Tomer Koren

We study an online learning framework introduced by Mannor and Shamir (2011) in which the feedback is specified by a graph, in a setting where the graph may vary from round to round and is \emph{never fully revealed} to the learner.

Steps Toward Deep Kernel Methods from Infinite Neural Networks

no code implementations 20 Aug 2015 Tamir Hazan, Tommi Jaakkola

Contemporary deep neural networks exhibit impressive results on practical problems.

Gaussian Processes

On Measure Concentration of Random Maximum A-Posteriori Perturbations

no code implementations 15 Oct 2013 Francesco Orabona, Tamir Hazan, Anand D. Sarwate, Tommi Jaakkola

Applying the general result to MAP perturbations can yield a more efficient algorithm to approximate sampling from the Gibbs distribution.

Blending Learning and Inference in Structured Prediction

no code implementations 8 Oct 2012 Tamir Hazan, Alexander Schwing, David McAllester, Raquel Urtasun

In this paper we derive an efficient algorithm to learn the parameters of structured predictors in general graphical models.

Scene Understanding Semantic Segmentation +1

Constraints Based Convex Belief Propagation

no code implementations NeurIPS 2016 Yaniv Tenzer, Alex Schwing, Kevin Gimpel, Tamir Hazan

Inference in Markov random fields subject to consistency structure is a fundamental problem that arises in many real-life applications.

Globally Convergent Dual MAP LP Relaxation Solvers using Fenchel-Young Margins

no code implementations NeurIPS 2012 Alex Schwing, Tamir Hazan, Marc Pollefeys, Raquel Urtasun

While finding the exact solution for the MAP inference problem is intractable for many real-world tasks, MAP LP relaxations have been shown to be very effective in practice.

A Primal-Dual Message-Passing Algorithm for Approximated Large Scale Structured Prediction

no code implementations NeurIPS 2010 Tamir Hazan, Raquel Urtasun

We then propose an intuitive approximation for structured prediction problems using Fenchel duality based on a local entropy approximation that computes the exact gradients of the approximated problem and is guaranteed to converge.

Image Denoising Structured Prediction

Direct Loss Minimization for Structured Prediction

no code implementations NeurIPS 2010 Tamir Hazan, Joseph Keshet, David A. McAllester

In discriminative machine learning one is interested in training a system to optimize a certain desired measure of performance, or loss.

Binary Classification Machine Translation +2
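The direct loss minimization result behind this paper approximates the gradient of the task loss by contrasting standard inference with loss-augmented inference at a small perturbation epsilon. A simplified finite-epsilon sketch over an enumerable label set (sign conventions and the structured inference step are simplified here):

```python
import numpy as np

def direct_loss_gradient(w, features, y_true, loss, eps=1e-2):
    """Finite-difference approximation of the task-loss gradient w.r.t. w.

    features maps each candidate label y to its feature vector phi(x, y).
    The gradient estimate is (phi(x, y_loss_augmented) - phi(x, y_pred)) / eps.
    """
    def score(y):
        return w @ features[y]

    labels = list(features)
    y_pred = max(labels, key=score)                                        # standard inference
    y_aug = max(labels, key=lambda y: score(y) + eps * loss(y, y_true))    # loss-augmented inference
    return (features[y_aug] - features[y_pred]) / eps

w = np.array([0.1, -0.3])
features = {"a": np.array([1.0, 0.0]), "b": np.array([0.0, 1.0])}
hamming = lambda y, y_true: 0.0 if y == y_true else 1.0
grad = direct_loss_gradient(w, features, "a", hamming)
w -= 0.5 * grad   # one descent step toward lower task loss
```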

Congruency-Based Reranking

no code implementations CVPR 2014 Itai Ben-Shalom, Noga Levy, Lior Wolf, Nachum Dershowitz, Adiel Ben-Shalom, Roni Shweka, Yaacov Choueka, Tamir Hazan, Yaniv Bar

The utility of the tool is demonstrated within the context of visual search of documents from the Cairo Genizah and for retrieval of paintings by the same artist and in the same style.

Clustering Re-Ranking +1

Direct Policy Gradients: Direct Optimization of Policies in Discrete Action Spaces

no code implementations NeurIPS 2020 Guy Lorberbom, Chris J. Maddison, Nicolas Heess, Tamir Hazan, Daniel Tarlow

A main benefit of DirPG algorithms is that they allow the insertion of domain knowledge in the form of upper bounds on return-to-go at training time, as is used in heuristic search, while still directly computing a policy gradient.

A Formal Approach to Explainability

no code implementations 15 Jan 2020 Lior Wolf, Tomer Galanti, Tamir Hazan

We regard explanations as a blending of the input sample and the model's output and offer a few definitions that capture various desired properties of the function that generates these explanations.

On the generalization of Bayesian deep nets for multi-class classification

no code implementations 23 Feb 2020 Yossi Adi, Yaniv Nemcovsky, Alex Schwing, Tamir Hazan

Generalization bounds which assess the difference between the true risk and the empirical risk have been studied extensively.

General Classification Generalization Bounds +1

Generalized Planning With Deep Reinforcement Learning

no code implementations 5 May 2020 Or Rivlin, Tamir Hazan, Erez Karpas

A hallmark of intelligence is the ability to deduce general principles from examples, which are correct beyond the range of those observed.

Reinforcement Learning (RL)

Constant Random Perturbations Provide Adversarial Robustness with Minimal Effect on Accuracy

1 code implementation 15 Mar 2021 Bronya Roni Chernyak, Bhiksha Raj, Tamir Hazan, Joseph Keshet

This paper proposes an attack-independent (non-adversarial training) technique for improving adversarial robustness of neural network models, with minimal loss of standard accuracy.

Adversarial Robustness

Mixing between the Cross Entropy and the Expectation Loss Terms

no code implementations 12 Sep 2021 Barak Battash, Lior Wolf, Tamir Hazan

The cross entropy loss is widely used due to its effectiveness and solid theoretical grounding.
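The title points to a convex combination of the cross entropy with the expectation loss, i.e. the expected zero-one loss under the softmax. A minimal PyTorch sketch of one plausible mixing, where the weight alpha is a hypothetical knob (the paper's exact weighting or schedule may differ):

```python
import torch
import torch.nn.functional as F

def mixed_ce_expectation_loss(logits, targets, alpha=0.5):
    """Convex mix of cross entropy and the expectation (expected 0-1) loss.

    Cross entropy:     -log p(target)
    Expectation loss:   1 - p(target), i.e. the expected zero-one loss.
    """
    probs = F.softmax(logits, dim=-1)
    p_target = probs.gather(1, targets.unsqueeze(1)).squeeze(1)
    ce = F.cross_entropy(logits, targets)
    expectation = (1.0 - p_target).mean()
    return alpha * ce + (1.0 - alpha) * expectation

logits = torch.randn(8, 10, requires_grad=True)
targets = torch.randint(0, 10, (8,))
loss = mixed_ce_expectation_loss(logits, targets, alpha=0.7)
loss.backward()
```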

PAC-Bayesian Neural Network Bounds

no code implementations25 Sep 2019 Yossi Adi, Alex Schwing, Tamir Hazan

Bayesian neural networks, which both use the negative log-likelihood loss function and average their predictions using a learned posterior over the parameters, have been used successfully across many scientific fields, partly due to their ability to 'effortlessly' extract desired representations from many large-scale datasets.

Generalization Bounds

Latent Space Explanation by Intervention

no code implementations 9 Dec 2021 Itai Gat, Guy Lorberbom, Idan Schwartz, Tamir Hazan

The success of deep neural nets heavily relies on their ability to encode complex relations between their input and their output.

On the Importance of Gradient Norm in PAC-Bayesian Bounds

no code implementations 12 Oct 2022 Itai Gat, Yossi Adi, Alexander Schwing, Tamir Hazan

Generalization bounds, which assess the difference between the true risk and the empirical risk, have been studied extensively.

Generalization Bounds

Layer Collaboration in the Forward-Forward Algorithm

no code implementations 21 May 2023 Guy Lorberbom, Itai Gat, Yossi Adi, Alex Schwing, Tamir Hazan

We show that the current version of the forward-forward algorithm is suboptimal when considering information flow in the network, resulting in a lack of collaboration between layers of the network.
