no code implementations • 12 Oct 2022 • Itai Gat, Yossi Adi, Alexander Schwing, Tamir Hazan
Generalization bounds, which assess the difference between the true risk and the empirical risk, have been studied extensively.
1 code implementation • 12 Jun 2022 • Itai Gat, Nitay Calderon, Roi Reichart, Tamir Hazan
This work suggests a theoretical framework for model interpretability by measuring the contribution of relevant features to the functional entropy of the network with respect to the input.
1 code implementation • 6 Jun 2022 • Tom Ron, Michal Weiler-Sagie, Tamir Hazan
Recently, attention mechanisms have shown compelling results both in their predictive performance and in their interpretability.
1 code implementation • ICLR 2022 • Alon Berliner, Guy Rotman, Yossi Adi, Roi Reichart, Tamir Hazan
Discrete variational auto-encoders (VAEs) are able to represent semantic latent spaces in generative learning.
no code implementations • 9 Dec 2021 • Itai Gat, Guy Lorberbom, Idan Schwartz, Tamir Hazan
The success of deep neural nets heavily relies on their ability to encode complex relations between their input and their output.
1 code implementation • NeurIPS 2021 • Guy Lorberbom, Daniel D. Johnson, Chris J. Maddison, Daniel Tarlow, Tamir Hazan
To perform counterfactual reasoning in Structural Causal Models (SCMs), one needs to know the causal mechanisms, which provide factorizations of conditional distributions into noise sources and deterministic functions mapping realizations of noise to samples.
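As a hedged illustration of that factorization (a toy linear SCM with invented mechanisms, not the paper's model), the standard abduction-action-prediction recipe recovers the noise terms from an observation and replays them under an intervention:

```python
# Toy SCM:  X := U_x,   Y := 2*X + U_y   (hypothetical mechanisms).

def abduct(x_obs, y_obs):
    # Abduction: invert the deterministic mechanisms to recover the noise.
    u_x = x_obs
    u_y = y_obs - 2.0 * x_obs
    return u_x, u_y

def counterfactual_y(x_cf, u_y):
    # Action + prediction: re-run Y's mechanism under do(X = x_cf),
    # reusing the abducted noise realization u_y.
    return 2.0 * x_cf + u_y

_, u_y = abduct(x_obs=1.0, y_obs=2.5)        # implies u_y = 0.5
print(counterfactual_y(x_cf=3.0, u_y=u_y))   # prints 6.5
```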
1 code implementation • 21 Oct 2021 • Ameen Ali, Idan Schwartz, Tamir Hazan, Lior Wolf
Traditionally, video and text matching is done by learning a shared embedding space, where the encoding of one modality is independent of the other.
no code implementations • 12 Sep 2021 • Barak Battash, Lior Wolf, Tamir Hazan
The cross entropy loss is widely used due to its effectiveness and solid theoretical grounding.
1 code implementation • CVPR 2021 • Bar Mayo, Tamir Hazan, Ayellet Tal
This combination of the "what" and the "where" allows the agent to navigate toward the sought-after object effectively.
1 code implementation • 15 Mar 2021 • Bronya Roni Chernyak, Bhiksha Raj, Tamir Hazan, Joseph Keshet
This paper proposes an attack-independent (non-adversarial training) technique for improving adversarial robustness of neural network models, with minimal loss of standard accuracy.
1 code implementation • NeurIPS 2020 • Itai Gat, Idan Schwartz, Alexander Schwing, Tamir Hazan
However, regularization with the functional entropy is challenging.
Ranked #3 on Visual Question Answering (VQA) on VQA-CP
no code implementations • ICLR 2021 • Shauharda Khadka, Estelle Aflalo, Mattias Marder, Avrech Ben-David, Santiago Miret, Shie Mannor, Tamir Hazan, Hanlin Tang, Somdeb Majumdar
For deep neural network accelerators, memory movement is energetically expensive and can bound computation.
1 code implementation • 11 Jul 2020 • Hedda Cohen Indelman, Tamir Hazan
In this work, we learn the variance of these randomized structured predictors and show that it balances better between the learned score function and the randomized noise in structured prediction.
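A rough sketch of the idea on a toy unstructured problem (invented scores; not the paper's estimator): the scale of the injected Gumbel noise controls how sharply the randomized predictor concentrates on the highest-scoring structure, which is why learning it matters.

```python
import numpy as np

rng = np.random.default_rng(0)
scores = np.array([2.0, 1.5, 0.0])  # learned scores for three candidate structures

for sigma in (0.1, 1.0, 5.0):       # the noise scale the paper proposes to learn
    # Randomized predictor: argmax of scores perturbed by scaled Gumbel noise.
    samples = np.argmax(scores + sigma * rng.gumbel(size=(10000, 3)), axis=1)
    print(sigma, np.bincount(samples, minlength=3) / 10000)
```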
no code implementations • 5 May 2020 • Or Rivlin, Tamir Hazan, Erez Karpas
A hallmark of intelligence is the ability to deduce general principles from examples, which are correct beyond the range of those observed.
no code implementations • 23 Feb 2020 • Yossi Adi, Yaniv Nemcovsky, Alex Schwing, Tamir Hazan
Generalization bounds, which assess the difference between the true risk and the empirical risk, have been studied extensively.
no code implementations • 15 Jan 2020 • Lior Wolf, Tomer Galanti, Tamir Hazan
We regard explanations as a blending of the input sample and the model's output and offer a few definitions that capture various desired properties of the function that generates these explanations.
1 code implementation • NeurIPS 2019 • Guy Lorberbom, Tommi Jaakkola, Andreea Gane, Tamir Hazan
Reparameterization of variational auto-encoders with continuous random variables is an effective method for reducing the variance of their gradient estimates.
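For reference, the continuous (Gaussian) reparameterization in question is the pathwise trick sketched below in PyTorch; the difficulty these works tackle is obtaining comparably low-variance estimators when the latent variables are discrete.

```python
import torch

# Pathwise (reparameterization) gradient: instead of sampling
# z ~ N(mu, sigma^2) directly, sample eps ~ N(0, 1) and set
# z = mu + sigma * eps, so gradients flow through mu and sigma.
mu = torch.tensor([0.5], requires_grad=True)
log_sigma = torch.tensor([0.0], requires_grad=True)

eps = torch.randn(1)
z = mu + log_sigma.exp() * eps    # differentiable sample
loss = (z ** 2).mean()            # stand-in for a decoder / ELBO term
loss.backward()
print(mu.grad, log_sigma.grad)    # low-variance pathwise gradients
```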
no code implementations • 25 Sep 2019 • Yossi Adi, Alex Schwing, Tamir Hazan
Bayesian neural networks, which both use the negative log-likelihood loss function and average their predictions using a learned posterior over the parameters, have been used successfully across many scientific fields, partly due to their ability to 'effortlessly' extract desired representations from many large-scale datasets.
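A hedged sketch of the prediction-averaging step (a one-parameter "network" with an assumed Gaussian posterior, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# Toy model sigmoid(w * x); q(w) = N(1.0, 0.5^2) stands in for the
# learned posterior over the parameters.
x = 2.0
w = rng.normal(1.0, 0.5, size=10000)   # w ~ q(w)
print(sigmoid(w * x).mean())           # posterior-averaged prediction
print(sigmoid(1.0 * x))                # point prediction at the posterior mean
```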
no code implementations • NeurIPS 2020 • Guy Lorberbom, Chris J. Maddison, Nicolas Heess, Tamir Hazan, Daniel Tarlow
A main benefit of DirPG algorithms is that they allow the insertion of domain knowledge, in the form of upper bounds on return-to-go at training time, as is done in heuristic search, while still directly computing a policy gradient.
1 code implementation • CVPR 2019 • Idan Schwartz, Seunghak Yu, Tamir Hazan, Alexander Schwing
We address this issue and develop a general attention mechanism for visual dialog which operates on any number of data utilities.
Ranked #1 on Visual Dialog on VisDial v0.9 val
no code implementations • TACL 2019 • Amichay Doitch, Ram Yazdi, Tamir Hazan, Roi Reichart
In this paper we propose a perturbation-based approach where sampling from a probabilistic model is computationally efficient.
2 code implementations • ICLR 2019 • Guy Lorberbom, Andreea Gane, Tommi Jaakkola, Tamir Hazan
We demonstrate empirically the effectiveness of the direct loss minimization technique in variational autoencoders with both unstructured and structured discrete latent variables.
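A minimal, hedged rendering of direct loss minimization through the argmax for a single categorical variable (a finite-difference toy; the papers apply the idea inside a VAE with structured latents):

```python
import numpy as np

rng = np.random.default_rng(2)

def direct_grad(theta, loss, eps=0.1, n=20000):
    # Estimates d/dtheta E[loss(z)], z = argmax_k (theta_k + gumbel_k),
    # by comparing the argmax with and without a loss-perturbed score.
    grad = np.zeros(len(theta))
    for _ in range(n):
        g = rng.gumbel(size=len(theta))
        z = np.argmax(theta + g)
        z_eps = np.argmax(theta + g + eps * loss)  # loss-perturbed argmax
        grad[z_eps] += 1.0 / eps
        grad[z] -= 1.0 / eps
    return grad / n

print(direct_grad(np.zeros(2), np.array([0.0, 1.0])))
# roughly [-0.25, 0.25]: descending this gradient shifts probability
# mass away from the high-loss outcome
```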
1 code implementation • NeurIPS 2017 • Idan Schwartz, Alexander G. Schwing, Tamir Hazan
The quest for algorithms that enable cognitive abilities is an important part of machine learning.
no code implementations • 24 Feb 2017 • Alon Cohen, Tamir Hazan, Tomer Koren
We revisit the study of optimal regret rates in bandit combinatorial optimization: a fundamental framework for sequential decision making under uncertainty that abstracts numerous combinatorial prediction problems.
no code implementations • 31 Jan 2017 • Hadar Averbuch-Elor, Johannes Kopf, Tamir Hazan, Daniel Cohen-Or
Thus, to disambiguate what the common foreground object is, we introduce a weakly-supervised technique, where we assume only a small seed, given in the form of a single segmented image.
no code implementations • NeurIPS 2016 • Yaniv Tenzer, Alex Schwing, Kevin Gimpel, Tamir Hazan
Inference in Markov random fields subject to consistency structure is a fundamental problem that arises in many real-life applications.
no code implementations • 23 May 2016 • Alon Cohen, Tamir Hazan, Tomer Koren
We study an online learning framework introduced by Mannor and Shamir (2011) in which the feedback is specified by a graph, in a setting where the graph may vary from round to round and is never fully revealed to the learner.
no code implementations • 10 Feb 2016 • Tamir Hazan, Francesco Orabona, Anand D. Sarwate, Subhransu Maji, Tommi Jaakkola
This paper shows that the expected value of perturb-max inference with low dimensional perturbations can be used sequentially to generate unbiased samples from the Gibbs distribution.
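The identity underlying perturb-max inference, in toy form (full independent perturbations over three configurations; the paper's point is that cheaper low-dimensional perturbations, applied sequentially, can also yield unbiased samples):

```python
import numpy as np

rng = np.random.default_rng(3)

# Gumbel-max identity: argmax_x [theta(x) + gamma(x)], with i.i.d. Gumbel
# noise gamma, is an exact sample from the Gibbs distribution
# p(x) proportional to exp(theta(x)).
theta = np.array([1.0, 0.0, -1.0])
gibbs = np.exp(theta) / np.exp(theta).sum()

samples = np.argmax(theta + rng.gumbel(size=(100000, 3)), axis=1)
print(gibbs)                                     # exact Gibbs probabilities
print(np.bincount(samples, minlength=3) / 1e5)   # perturb-max frequencies
```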
no code implementations • 9 Jan 2016 • Jörg Hendrik Kappes, Paul Swoboda, Bogdan Savchynskyy, Tamir Hazan, Christoph Schnörr
We present a probabilistic graphical model formulation for the graph clustering problem.
no code implementations • 20 Aug 2015 • Tamir Hazan, Tommi Jaakkola
Contemporary deep neural networks exhibit impressive results on practical problems.
no code implementations • CVPR 2014 • Itai Ben-Shalom, Noga Levy, Lior Wolf, Nachum Dershowitz, Adiel Ben-Shalom, Roni Shweka, Yaacov Choueka, Tamir Hazan, Yaniv Bar
The utility of the tool is demonstrated within the context of visual search of documents from the Cairo Genizah and for retrieval of paintings by the same artist and in the same style.
no code implementations • NeurIPS 2013 • Tamir Hazan, Subhransu Maji, Joseph Keshet, Tommi Jaakkola
In this work we develop efficient methods for learning random MAP predictors for structured label problems.
no code implementations • 15 Oct 2013 • Francesco Orabona, Tamir Hazan, Anand D. Sarwate, Tommi Jaakkola
Applying the general result to MAP perturbations can yield a more efficient algorithm to approximate sampling from the Gibbs distribution.
no code implementations • NeurIPS 2013 • Tamir Hazan, Subhransu Maji, Tommi Jaakkola
In this paper we describe how MAP inference can be used to sample efficiently from Gibbs distributions.
no code implementations • NeurIPS 2012 • Alex Schwing, Tamir Hazan, Marc Pollefeys, Raquel Urtasun
While finding the exact solution for the MAP inference problem is intractable for many real-world tasks, MAP LP relaxations have been shown to be very effective in practice.
no code implementations • 8 Oct 2012 • Tamir Hazan, Alexander Schwing, David McAllester, Raquel Urtasun
In this paper we derive an efficient algorithm to learn the parameters of structured predictors in general graphical models.
no code implementations • NeurIPS 2010 • Tamir Hazan, Joseph Keshet, David A. McAllester
In discriminative machine learning one is interested in training a system to optimize a certain desired measure of performance, or loss.
no code implementations • NeurIPS 2010 • Tamir Hazan, Raquel Urtasun
We then propose an intuitive approximation for structured prediction problems using Fenchel duality, based on a local entropy approximation that computes exact gradients of the approximated problem and is guaranteed to converge.
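For context, a local entropy approximation typically takes the form below (a hedged reconstruction from the standard literature, not necessarily this paper's exact objective): the intractable joint entropy is replaced by a weighted combination of entropies of local marginals, so the Fenchel dual decomposes over regions.

```latex
H(p) \;\approx\; \sum_{\alpha} c_{\alpha}\, H(p_{\alpha}) \;+\; \sum_{i} c_{i}\, H(p_{i})
```

Here $p_{\alpha}$ and $p_{i}$ are region and node marginals and $c_{\alpha}, c_{i}$ are counting numbers; it is this decomposition that makes exact gradients of the approximated problem computable.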