Search Results for author: Noam Koenigstein

Found 21 papers, 5 papers with code

Detecting Security Patches via Behavioral Data in Code Repositories

no code implementations · 4 Feb 2023 · Nitzan Farhi, Noam Koenigstein, Yuval Shavitt

The vast majority of software today is developed collaboratively using version control tools such as Git.

MetricBERT: Text Representation Learning via Self-Supervised Triplet Training

no code implementations · 13 Aug 2022 · Itzik Malkiel, Dvir Ginzburg, Oren Barkan, Avi Caciularu, Yoni Weill, Noam Koenigstein

We present MetricBERT, a BERT-based model that learns to embed text under a well-defined similarity metric while simultaneously adhering to the "traditional" masked-language task.

Representation Learning

Interpreting BERT-based Text Similarity via Activation and Saliency Maps

no code implementations · 13 Aug 2022 · Itzik Malkiel, Dvir Ginzburg, Oren Barkan, Avi Caciularu, Jonathan Weill, Noam Koenigstein

Recently, there has been growing interest in the ability of Transformer-based models to produce meaningful embeddings of text with several applications, such as text similarity.

text similarity

Grad-SAM: Explaining Transformers via Gradient Self-Attention Maps

no code implementations · 23 Apr 2022 · Oren Barkan, Edan Hauon, Avi Caciularu, Ori Katz, Itzik Malkiel, Omri Armstrong, Noam Koenigstein

Transformer-based language models have significantly advanced the state of the art in many linguistic tasks.

Cold Item Integration in Deep Hybrid Recommenders via Tunable Stochastic Gates

no code implementations · 12 Dec 2021 · Oren Barkan, Roy Hirsch, Ori Katz, Avi Caciularu, Jonathan Weill, Noam Koenigstein

Next, we propose a novel hybrid recommendation algorithm that bridges these two conflicting objectives, balancing high accuracy for warm items with effective promotion of completely cold items.

Collaborative Filtering

GAM: Explainable Visual Similarity and Classification via Gradient Activation Maps

no code implementations · 2 Sep 2021 · Oren Barkan, Omri Armstrong, Amir Hertz, Avi Caciularu, Ori Katz, Itzik Malkiel, Noam Koenigstein

The algorithmic advantages of GAM are explained in detail and validated empirically, showing that GAM outperforms its alternatives across various tasks and datasets.


Forecasting CPI Inflation Components with Hierarchical Recurrent Neural Networks

1 code implementation · 16 Nov 2020 · Oren Barkan, Jonathan Benchimol, Itamar Caspi, Eliya Cohen, Allon Hammer, Noam Koenigstein

We present a hierarchical architecture based on Recurrent Neural Networks (RNNs) for predicting disaggregated inflation components of the Consumer Price Index (CPI).

Bayesian Hierarchical Words Representation Learning

no code implementations · ACL 2020 · Oren Barkan, Idan Rejwan, Avi Caciularu, Noam Koenigstein

BHWR facilitates Variational Bayes word representation learning combined with semantic taxonomy modeling via hierarchical priors.

Representation Learning


Autoencoders

no code implementations · 12 Mar 2020 · Dor Bank, Noam Koenigstein, Raja Giryes

An autoencoder is a specific type of neural network, mainly designed to encode the input into a compressed and meaningful representation and then decode it back such that the reconstructed input is as similar as possible to the original one.
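The encode-compress-decode idea can be sketched with a minimal linear autoencoder trained by gradient descent on the reconstruction error. This is a toy illustration of the general concept, not the architecture from the paper; all data and hyperparameters are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))            # toy data: 200 samples, 8 features

# linear autoencoder: encode the 8-d input into a 3-d code, then decode back
W_enc = rng.normal(scale=0.1, size=(8, 3))
W_dec = rng.normal(scale=0.1, size=(3, 8))

def mse(A, B):
    return float(((A - B) ** 2).mean())

lr = 0.05
initial_error = mse(X @ W_enc @ W_dec, X)
for _ in range(500):
    Z = X @ W_enc                        # compressed representation (the "code")
    X_hat = Z @ W_dec                    # reconstruction
    err = X_hat - X
    # gradient descent on the mean squared reconstruction error
    grad_dec = Z.T @ err / len(X)
    grad_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc
final_error = mse(X @ W_enc @ W_dec, X)
print(final_error < initial_error)       # reconstruction improves with training
```

Because the 3-d code cannot hold all 8 dimensions, the network is forced to keep only the most informative directions of the data, which is what makes the representation "compressed and meaningful".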

Neural Attentive Multiview Machines

no code implementations · 18 Feb 2020 · Oren Barkan, Ori Katz, Noam Koenigstein

An important problem in multiview representation learning is finding the optimal combination of views with respect to the specific task at hand.

Representation Learning

Attentive Item2Vec: Neural Attentive User Representations

no code implementations · 15 Feb 2020 · Oren Barkan, Avi Caciularu, Ori Katz, Noam Koenigstein

However, it is possible that a certain early movie may suddenly become more relevant in the presence of a popular sequel.

Recommendation Systems

Scalable Attentive Sentence-Pair Modeling via Distilled Sentence Embedding

1 code implementation · 14 Aug 2019 · Oren Barkan, Noam Razin, Itzik Malkiel, Ori Katz, Avi Caciularu, Noam Koenigstein

In this paper, we introduce Distilled Sentence Embedding (DSE), a model based on knowledge distillation from cross-attentive models, focusing on sentence-pair tasks.

Knowledge Distillation · Natural Language Understanding +3
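A hedged sketch of the distillation idea behind such sentence-pair models: an expensive cross-attentive teacher scores each sentence pair, and a cheap student learns one embedding per sentence whose dot product regresses onto the teacher's scores, so sentences can later be embedded independently. The teacher scores, dimensions, and optimization here are all illustrative, not DSE's actual training setup.

```python
import numpy as np

rng = np.random.default_rng(0)
# hypothetical teacher scores for 4 sentence pairs,
# e.g. produced by a cross-attentive model
teacher = np.array([0.9, 0.1, 0.8, 0.2])

# student: one embedding per sentence; the pair score is a plain dot product
E1 = rng.normal(scale=0.5, size=(4, 5))
E2 = rng.normal(scale=0.5, size=(4, 5))

def pair_scores(E1, E2):
    return (E1 * E2).sum(axis=1)

lr = 0.05
initial_loss = float(((pair_scores(E1, E2) - teacher) ** 2).mean())
for _ in range(1000):
    err = pair_scores(E1, E2) - teacher
    # distillation objective: mean squared error against the teacher's scores
    g1 = err[:, None] * E2
    g2 = err[:, None] * E1
    E1 -= lr * g1
    E2 -= lr * g2
final_loss = float(((pair_scores(E1, E2) - teacher) ** 2).mean())
print(final_loss < initial_loss)
```

The payoff is scalability: after training, each sentence is embedded once, and pair scoring reduces to a dot product instead of a full cross-attentive forward pass.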

The Bayesian Low-Rank Determinantal Point Process Mixture Model

no code implementations · 15 Aug 2016 · Mike Gartrell, Ulrich Paquet, Noam Koenigstein

Determinantal point processes (DPPs) are an elegant model for encoding probabilities over subsets (such as shopping baskets) of a ground set (such as an item catalog).

Point Processes · Product Recommendation
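In an L-ensemble DPP, the probability of a subset Y is P(Y) = det(L_Y) / det(L + I), where L_Y is the kernel restricted to Y; the determinant shrinks when the subset contains similar items, so the model favors diverse sets. A minimal sketch with a made-up toy kernel (not from the paper):

```python
import numpy as np

def dpp_prob(L, subset):
    """P(Y) = det(L_Y) / det(L + I) for an L-ensemble DPP."""
    idx = np.asarray(subset, dtype=int)
    L_Y = L[np.ix_(idx, idx)]
    return float(np.linalg.det(L_Y) / np.linalg.det(L + np.eye(len(L))))

# toy kernel over a 3-item catalog; off-diagonal entries encode similarity,
# so subsets of similar items receive low probability
L = np.array([[1.0, 0.9, 0.0],
              [0.9, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
print(dpp_prob(L, [0, 1]))   # similar pair: low probability
print(dpp_prob(L, [0, 2]))   # dissimilar pair: higher probability
```

This diversity-promoting behavior is what makes DPPs attractive for modeling shopping baskets, where a basket rarely contains near-duplicate items.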

Item2Vec: Neural Item Embedding for Collaborative Filtering

7 code implementations · 14 Mar 2016 · Oren Barkan, Noam Koenigstein

Many Collaborative Filtering (CF) algorithms are item-based in the sense that they analyze item-item relations in order to produce item similarities.

Collaborative Filtering
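Item2Vec applies a skip-gram-with-negative-sampling objective to sets of items (e.g. purchase baskets) instead of word sequences: every pair of items sharing a basket is a positive example. A minimal NumPy sketch under that assumption; the toy data and hyperparameters are illustrative, not from the paper.

```python
import numpy as np

def item2vec_sgns(baskets, n_items, dim=8, epochs=300, lr=0.1, neg=2, seed=0):
    """Tiny skip-gram with negative sampling over item baskets."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.1, size=(n_items, dim))  # target embeddings
    C = rng.normal(scale=0.1, size=(n_items, dim))  # context embeddings
    sig = lambda x: 1.0 / (1.0 + np.exp(-x))
    for _ in range(epochs):
        for basket in baskets:
            for t in basket:
                for c in basket:
                    if t == c:
                        continue
                    g = sig(W[t] @ C[c]) - 1.0      # positive pair: push score up
                    wt = W[t].copy()
                    W[t] -= lr * g * C[c]
                    C[c] -= lr * g * wt
                    for n in rng.integers(0, n_items, size=neg):
                        if n == t or n == c:
                            continue
                        g = sig(W[t] @ C[n])        # negative sample: push score down
                        wt = W[t].copy()
                        W[t] -= lr * g * C[n]
                        C[n] -= lr * g * wt
    return W, C

# toy catalog: items 0 and 1 are always co-purchased, as are items 2 and 3
baskets = [[0, 1], [2, 3]] * 5
W, C = item2vec_sgns(baskets, n_items=4)
# the trained model scores the co-purchased pair above an unrelated pair
print(W[0] @ C[1] > W[0] @ C[2])
```

The only change from word2vec is the notion of "context": a basket is an unordered set, so every other item in the basket is in the target item's window.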

Low-Rank Factorization of Determinantal Point Processes for Recommendation

1 code implementation · 17 Feb 2016 · Mike Gartrell, Ulrich Paquet, Noam Koenigstein

In this work we present a new method for learning the DPP kernel from observed data using a low-rank factorization of this kernel.

Point Processes · Product Recommendation
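With a low-rank factorization L = VVᵀ (V of shape n × k), the kernel is positive semi-definite by construction, storage drops from O(n²) to O(nk), and the DPP assigns zero probability to any subset larger than k, since the corresponding submatrix is singular. A quick sketch of that last property, assuming this factorized form and toy sizes:

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, rank = 6, 2
V = rng.normal(size=(n_items, rank))  # low-rank factor (learned from data)
L = V @ V.T                           # PSD kernel of rank <= 2

def subset_det(L, subset):
    idx = np.asarray(subset, dtype=int)
    return float(np.linalg.det(L[np.ix_(idx, idx)]))

print(subset_det(L, [0, 1]))          # generically positive
print(subset_det(L, [0, 1, 2]))       # ~0: subsets larger than the rank get no mass
```

The rank thus acts as a hard cap on subset size, which fits recommendation settings where observed baskets are small relative to the catalog.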

Scalable Bayesian Modelling of Paired Symbols

no code implementations · 9 Sep 2014 · Ulrich Paquet, Noam Koenigstein, Ole Winther

We present a novel, scalable Bayesian approach to modelling the occurrence of pairs of symbols (i, j) drawn from a large vocabulary.

One-class Collaborative Filtering with Random Graphs: Annotated Version

no code implementations · 26 Sep 2013 · Ulrich Paquet, Noam Koenigstein

The bane of one-class collaborative filtering is interpreting and modelling the latent signal from the missing class.

Collaborative Filtering · Variational Inference
