1 code implementation • 23 Dec 2024 • Oren Barkan, Yehonatan Elisha, Jonathan Weill, Noam Koenigstein
To address the diversity in metrics and accommodate the variety of baseline representations in a unified manner, we propose Baseline Exploration-Exploitation (BEE) - a path-integration method that introduces randomness to the integration process by modeling the baseline as a learned random tensor.
1 code implementation • 5 Mar 2024 • Yakir Yehuda, Itzik Malkiel, Oren Barkan, Jonathan Weill, Royi Ronen, Noam Koenigstein
Despite the many advances of Large Language Models (LLMs) and their unprecedented rapid evolution, their impact and integration into every facet of our daily lives are limited for various reasons.
1 code implementation • 23 Jan 2024 • Noy Uzrad, Oren Barkan, Almog Elharar, Shlomi Shvartzman, Moshe Laufer, Lior Wolf, Noam Koenigstein
We introduce an open-source platform that comprises DiffMoog and an end-to-end sound matching framework.
1 code implementation • ICCV 2023 • Oren Barkan, Yehonatan Elisha, Yuval Asher, Amit Eshel, Noam Koenigstein
We introduce Iterated Integrated Attributions (IIA) - a generic method for explaining the predictions of vision models.
1 code implementation • 25 Oct 2023 • Oren Barkan, Yuval Asher, Amit Eshel, Yehonatan Elisha, Noam Koenigstein
We present Learning to Explain (LTX), a model-agnostic framework designed for providing post-hoc explanations for vision models.
1 code implementation • 23 Oct 2023 • Oren Barkan, Yehonatan Elisha, Jonathan Weill, Yuval Asher, Amit Eshel, Noam Koenigstein
This paper presents Deep Integrated Explanations (DIX) - a universal method for explaining vision models.
1 code implementation • ICCV 2023 • Oren Barkan, Tal Reiss, Jonathan Weill, Ori Katz, Roy Hirsch, Itzik Malkiel, Noam Koenigstein
Given an image of a certain object, the goal of VSD is to retrieve images of different objects with high perceptual visual similarity.
no code implementations • 28 Jun 2023 • Oren Barkan, Avi Caciularu, Idan Rejwan, Ori Katz, Jonathan Weill, Itzik Malkiel, Noam Koenigstein
We present Variational Bayesian Network (VBN) - a novel Bayesian entity representation learning model that utilizes hierarchical and relational side information and is particularly useful for modeling entities in the "long-tail", where the data is scarce.
no code implementations • 9 Jun 2023 • Itzik Malkiel, Uri Alon, Yakir Yehuda, Shahar Keren, Oren Barkan, Royi Ronen, Noam Koenigstein
The online phase is applied to every call separately and scores the similarity between the transcribed conversation and the topic anchors found in the offline phase.
1 code implementation • 4 Feb 2023 • Nitzan Farhi, Noam Koenigstein, Yuval Shavitt
The vast majority of software today is developed collaboratively using version control tools such as Git.
no code implementations • 13 Aug 2022 • Itzik Malkiel, Dvir Ginzburg, Oren Barkan, Avi Caciularu, Jonathan Weill, Noam Koenigstein
Recently, there has been growing interest in the ability of Transformer-based models to produce meaningful embeddings of text with several applications, such as text similarity.
no code implementations • 13 Aug 2022 • Itzik Malkiel, Dvir Ginzburg, Oren Barkan, Avi Caciularu, Yoni Weill, Noam Koenigstein
We present MetricBERT, a BERT-based model that learns to embed text under a well-defined similarity metric while simultaneously adhering to the "traditional" masked-language task.
no code implementations • 23 Apr 2022 • Oren Barkan, Edan Hauon, Avi Caciularu, Ori Katz, Itzik Malkiel, Omri Armstrong, Noam Koenigstein
Transformer-based language models significantly advanced the state-of-the-art in many linguistic tasks.
no code implementations • 12 Dec 2021 • Oren Barkan, Roy Hirsch, Ori Katz, Avi Caciularu, Jonathan Weill, Noam Koenigstein
Next, we propose a novel hybrid recommendation algorithm that bridges these two conflicting objectives and enables a harmonized balance between preserving high accuracy for warm items while effectively promoting completely cold items.
no code implementations • 2 Sep 2021 • Oren Barkan, Omri Armstrong, Amir Hertz, Avi Caciularu, Ori Katz, Itzik Malkiel, Noam Koenigstein
The algorithmic advantages of GAM are explained in detail and validated empirically, where it is shown that GAM outperforms its alternatives across various tasks and datasets.
1 code implementation • Findings (ACL) 2021 • Dvir Ginzburg, Itzik Malkiel, Oren Barkan, Avi Caciularu, Noam Koenigstein
Hence, we introduce SDR, a self-supervised method for document similarity that can be applied to documents of arbitrary length.
1 code implementation • 16 Nov 2020 • Oren Barkan, Jonathan Benchimol, Itamar Caspi, Eliya Cohen, Allon Hammer, Noam Koenigstein
We present a hierarchical architecture based on Recurrent Neural Networks (RNNs) for predicting disaggregated inflation components of the Consumer Price Index (CPI).
no code implementations • Findings of the Association for Computational Linguistics 2020 • Itzik Malkiel, Oren Barkan, Avi Caciularu, Noam Razin, Ori Katz, Noam Koenigstein
In addition, we introduce a new language understanding task for wine recommendations using similarities based on professional wine reviews.
no code implementations • ACL 2020 • Oren Barkan, Idan Rejwan, Avi Caciularu, Noam Koenigstein
BHWR facilitates Variational Bayes word representation learning combined with semantic taxonomy modeling via hierarchical priors.
no code implementations • 12 Mar 2020 • Dor Bank, Noam Koenigstein, Raja Giryes
An autoencoder is a specific type of neural network that is mainly designed to encode the input into a compressed and meaningful representation, and then decode it back such that the reconstructed input is as similar as possible to the original.
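The encode-compress-decode idea described above can be illustrated with a minimal sketch (not from the paper): a linear autoencoder in NumPy, trained by gradient descent to reconstruct toy data that lies on a low-dimensional subspace. The data shapes and learning rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 100 points in R^8 that actually live on a 2-D subspace,
# so a 2-unit bottleneck can represent them well.
basis = rng.normal(size=(2, 8))
X = rng.normal(size=(100, 2)) @ basis

# Linear autoencoder: encoder W_e (8 -> 2), decoder W_d (2 -> 8).
W_e = rng.normal(scale=0.1, size=(8, 2))
W_d = rng.normal(scale=0.1, size=(2, 8))

def loss(X, W_e, W_d):
    # Reconstruction error: encode, decode, compare to the input.
    R = X @ W_e @ W_d
    return np.mean((X - R) ** 2)

lr = 0.01
initial = loss(X, W_e, W_d)
for _ in range(500):
    H = X @ W_e                  # compressed codes
    R = H @ W_d                  # reconstruction
    G = 2 * (R - X) / X.size     # gradient of the MSE w.r.t. R
    g_d = H.T @ G                # gradient w.r.t. decoder weights
    g_e = X.T @ (G @ W_d.T)      # gradient w.r.t. encoder weights
    W_d -= lr * g_d
    W_e -= lr * g_e
final = loss(X, W_e, W_d)
print(initial, final)
```

Real autoencoders add nonlinearities and deeper stacks, but the objective is the same: minimize the reconstruction error through the bottleneck.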
no code implementations • 18 Feb 2020 • Oren Barkan, Ori Katz, Noam Koenigstein
An important problem in multiview representation learning is finding the optimal combination of views with respect to the specific task at hand.
no code implementations • 15 Feb 2020 • Oren Barkan, Avi Caciularu, Ori Katz, Noam Koenigstein
However, it is possible that a certain early movie may become suddenly more relevant in the presence of a popular sequel movie.
1 code implementation • 14 Aug 2019 • Oren Barkan, Noam Razin, Itzik Malkiel, Ori Katz, Avi Caciularu, Noam Koenigstein
In this paper, we introduce Distilled Sentence Embedding (DSE) - a model that is based on knowledge distillation from cross-attentive models, focusing on sentence-pair tasks.
2 code implementations • 15 Dec 2018 • Oren Barkan, David Tsiris, Ori Katz, Noam Koenigstein
Sound synthesis is a complex field that requires domain expertise.
no code implementations • 1 Nov 2016 • Oren Barkan, Noam Koenigstein, Eylon Yogev, Ori Katz
In Recommender Systems research, algorithms are often characterized as either Collaborative Filtering (CF) or Content Based (CB).
no code implementations • 15 Aug 2016 • Mike Gartrell, Ulrich Paquet, Noam Koenigstein
Determinantal point processes (DPPs) are an elegant model for encoding probabilities over subsets, such as shopping baskets, of a ground set, such as an item catalog.
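As a concrete illustration of "encoding probabilities over subsets" (a generic L-ensemble sketch, not the paper's model), a DPP assigns each subset S a probability proportional to the determinant of the corresponding kernel submatrix, which rewards diverse subsets. The 3-item kernel below is an invented toy example.

```python
import numpy as np
from itertools import combinations

# Toy L-ensemble kernel for a 3-item catalog; off-diagonal entries
# encode similarity, which the determinant turns into repulsion.
L = np.array([[1.0, 0.5, 0.0],
              [0.5, 1.0, 0.3],
              [0.0, 0.3, 1.0]])

n = L.shape[0]
Z = np.linalg.det(L + np.eye(n))   # normalizer: sums det(L_S) over all S

def prob(subset):
    """P(S) = det(L_S) / det(L + I); the empty set has det 1 by convention."""
    idx = list(subset)
    if not idx:
        return 1.0 / Z
    return np.linalg.det(L[np.ix_(idx, idx)]) / Z

# Probabilities over all 2^n subsets sum to 1.
total = sum(prob(s) for k in range(n + 1) for s in combinations(range(n), k))
print(round(total, 6))
```

Note that the dissimilar pair {0, 2} gets higher probability than the similar pair {0, 1}: this built-in preference for diversity is what makes DPPs attractive for modeling shopping baskets.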
7 code implementations • 14 Mar 2016 • Oren Barkan, Noam Koenigstein
Many Collaborative Filtering (CF) algorithms are item-based in the sense that they analyze item-item relations in order to produce item similarities.
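The classical item-based setup referenced here can be sketched in a few lines (a generic cosine-similarity baseline, not the paper's embedding method): treat each item as its column of the user-item interaction matrix and score item-item similarity by the cosine between columns. The interaction matrix is an invented toy example.

```python
import numpy as np

# Toy binary user-item interaction matrix (rows: users, cols: items).
R = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 0, 1, 1],
              [0, 1, 1, 1]], dtype=float)

# Item-item cosine similarity: each column is an item's interaction vector.
norms = np.linalg.norm(R, axis=0)
S = (R.T @ R) / np.outer(norms, norms)

# Most similar item to item 0 (excluding itself).
sims = S[0].copy()
sims[0] = -np.inf
best = int(np.argmax(sims))
print(best)  # -> 1: item 1 shares the most co-occurrences with item 0
```

Item2vec-style methods replace these raw co-occurrence vectors with learned embeddings, but the downstream use - ranking items by similarity to a query item - is the same.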
1 code implementation • 17 Feb 2016 • Mike Gartrell, Ulrich Paquet, Noam Koenigstein
In this work we present a new method for learning the DPP kernel from observed data using a low-rank factorization of this kernel.
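Two properties make the low-rank factorization L = V V^T attractive, sketched below under invented toy dimensions (this is a generic illustration, not the paper's learning algorithm): the normalizer can be computed from a small k x k determinant instead of an n x n one, and subsets larger than the rank k automatically receive zero probability.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 6, 2                     # 6 items, rank-2 factorization
V = rng.normal(size=(n, k))
L = V @ V.T                     # low-rank DPP kernel

# Normalizer via the n x n form vs the cheap k x k form
# (Weinstein-Aronszajn: det(I_n + V V^T) = det(I_k + V^T V)).
big = np.linalg.det(np.eye(n) + L)
small = np.linalg.det(np.eye(k) + V.T @ V)
print(np.isclose(big, small))   # -> True

# Subsets larger than the rank k have zero probability under L = V V^T,
# since the corresponding submatrix is rank-deficient.
S = [0, 1, 2]                   # |S| = 3 > k = 2
print(np.isclose(np.linalg.det(L[np.ix_(S, S)]), 0.0))  # -> True
```

The first identity is what lets learning scale with the rank k rather than the catalog size n.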
no code implementations • 9 Sep 2014 • Ulrich Paquet, Noam Koenigstein, Ole Winther
We present a novel, scalable and Bayesian approach to modelling the occurrence of pairs of symbols (i, j) drawn from a large vocabulary.
no code implementations • 26 Sep 2013 • Ulrich Paquet, Noam Koenigstein
The bane of one-class collaborative filtering is interpreting and modelling the latent signal from the missing class.