Search Results for author: Oren Barkan

Found 20 papers, 4 papers with code

Within-Between Lexical Relation Classification

no code implementations EMNLP 2020 Oren Barkan, Avi Caciularu, Ido Dagan

We propose the novel Within-Between Relation model for recognizing lexical-semantic relations between words.

Classification, General Classification, +1

Interpreting BERT-based Text Similarity via Activation and Saliency Maps

no code implementations 13 Aug 2022 Itzik Malkiel, Dvir Ginzburg, Oren Barkan, Avi Caciularu, Jonathan Weill, Noam Koenigstein

Recently, there has been growing interest in the ability of Transformer-based models to produce meaningful embeddings of text with several applications, such as text similarity.

Text Similarity

MetricBERT: Text Representation Learning via Self-Supervised Triplet Training

no code implementations 13 Aug 2022 Itzik Malkiel, Dvir Ginzburg, Oren Barkan, Avi Caciularu, Yoni Weill, Noam Koenigstein

We present MetricBERT, a BERT-based model that learns to embed text under a well-defined similarity metric while simultaneously adhering to the "traditional" masked-language task.

Representation Learning
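The triplet component of such self-supervised training can be sketched minimally: pull a positive example toward its anchor and push a negative at least a margin farther away. The vectors and margin below are illustrative toys, not MetricBERT's actual embeddings or hyperparameters.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge-style triplet loss: the positive should sit at least
    `margin` closer to the anchor than the negative does."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)
```

When the negative is already far enough away, the hinge clamps the loss to zero, so training focuses on violating triplets.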

Grad-SAM: Explaining Transformers via Gradient Self-Attention Maps

no code implementations 23 Apr 2022 Oren Barkan, Edan Hauon, Avi Caciularu, Ori Katz, Itzik Malkiel, Omri Armstrong, Noam Koenigstein

Transformer-based language models significantly advanced the state-of-the-art in many linguistic tasks.
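The core recipe behind gradient self-attention maps, gating attention weights by their gradients and aggregating into per-token relevance, can be sketched as follows. The aggregation axes and tensor layout here are assumptions for illustration; the paper's exact formulation may differ.

```python
import numpy as np

def grad_sam_scores(attentions, gradients):
    """Token relevance sketch: ReLU of (attention * its gradient),
    averaged over layers, heads and query positions.
    Both inputs have shape (layers, heads, tokens, tokens)."""
    weighted = np.maximum(attentions * gradients, 0.0)  # gradient-gated attention
    return weighted.mean(axis=(0, 1, 2))                # one score per token
```

The ReLU keeps only attention entries whose increase would raise the model's output, which is what lets the map highlight tokens that actually drive the prediction.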

Cold Item Integration in Deep Hybrid Recommenders via Tunable Stochastic Gates

no code implementations 12 Dec 2021 Oren Barkan, Roy Hirsch, Ori Katz, Avi Caciularu, Jonathan Weill, Noam Koenigstein

Next, we propose a novel hybrid recommendation algorithm that bridges these two conflicting objectives, enabling a harmonized balance between preserving high accuracy for warm items and effectively promoting completely cold items.

Collaborative Filtering
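One way to picture a tunable gate blending collaborative and content signals is the sketch below. The blend and the sigmoid parameterization are assumptions for illustration; the paper's stochastic gates would be sampled during training rather than used at their mean as here.

```python
import numpy as np

def gated_item_embedding(cf_vec, content_vec, gate_logit):
    """Blend a collaborative-filtering vector with a content-based one.
    A gate near 1 trusts CF (warm items); near 0 it falls back to
    content (cold items). This deterministic sketch uses the gate's
    mean; a stochastic gate would sample it instead."""
    g = 1.0 / (1.0 + np.exp(-gate_logit))   # sigmoid gate in [0, 1]
    return g * cf_vec + (1.0 - g) * content_vec
```

For a brand-new item with no interactions, driving the gate toward zero makes the recommender rely entirely on content features.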

GAM: Explainable Visual Similarity and Classification via Gradient Activation Maps

no code implementations 2 Sep 2021 Oren Barkan, Omri Armstrong, Amir Hertz, Avi Caciularu, Ori Katz, Itzik Malkiel, Noam Koenigstein

The algorithmic advantages of GAM are explained in detail and validated empirically, showing that GAM outperforms its alternatives across various tasks and datasets.


Forecasting CPI Inflation Components with Hierarchical Recurrent Neural Networks

1 code implementation 16 Nov 2020 Oren Barkan, Jonathan Benchimol, Itamar Caspi, Eliya Cohen, Allon Hammer, Noam Koenigstein

We present a hierarchical architecture based on Recurrent Neural Networks (RNNs) for predicting disaggregated inflation components of the Consumer Price Index (CPI).

Bayesian Hierarchical Words Representation Learning

no code implementations ACL 2020 Oren Barkan, Idan Rejwan, Avi Caciularu, Noam Koenigstein

BHWR facilitates Variational Bayes word representation learning combined with semantic taxonomy modeling via hierarchical priors.

Representation Learning

Neural Attentive Multiview Machines

no code implementations 18 Feb 2020 Oren Barkan, Ori Katz, Noam Koenigstein

An important problem in multiview representation learning is finding the optimal combination of views with respect to the specific task at hand.

Representation Learning
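Combining views via attention typically means scoring each view vector against a task-dependent query and taking a softmax-weighted sum. The sketch below illustrates that generic pattern only; the query, scoring function, and names are assumptions, not the paper's architecture.

```python
import numpy as np

def attentive_combine(views, query):
    """Softmax attention over views: score each view vector against a
    task query, then return the weighted sum.
    `views` has shape (num_views, dim); `query` has shape (dim,)."""
    scores = views @ query
    scores -= scores.max()                         # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()
    return weights @ views
```

With a zero query every view gets equal weight, so the combination degrades gracefully to a plain average.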

Attentive Item2Vec: Neural Attentive User Representations

no code implementations 15 Feb 2020 Oren Barkan, Avi Caciularu, Ori Katz, Noam Koenigstein

However, it is possible that a certain early movie may suddenly become more relevant in the presence of a popular sequel.

Recommendation Systems

Multiscale Self Attentive Convolutions for Vision and Language Modeling

no code implementations 3 Dec 2019 Oren Barkan

Self-attention mechanisms have become a key building block in many state-of-the-art language understanding models.

Language Modelling

Scalable Attentive Sentence-Pair Modeling via Distilled Sentence Embedding

1 code implementation 14 Aug 2019 Oren Barkan, Noam Razin, Itzik Malkiel, Ori Katz, Avi Caciularu, Noam Koenigstein

In this paper, we introduce Distilled Sentence Embedding (DSE), a model based on knowledge distillation from cross-attentive models, focusing on sentence-pair tasks.

Knowledge Distillation, Natural Language Understanding, +3
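The distillation idea can be sketched as regressing the student's cheap similarity (a dot product of independently encoded sentence embeddings) onto the expensive cross-attentive teacher's score. The MSE objective and toy vectors below are illustrative assumptions, not the paper's exact loss.

```python
import numpy as np

def distillation_loss(student_a, student_b, teacher_score):
    """Sketch: the student encodes each sentence separately, so its
    pair score is just a dot product; that score is pulled toward the
    cross-attentive teacher's score via squared error."""
    student_score = float(student_a @ student_b)
    return (student_score - teacher_score) ** 2
```

Because the student never needs cross-attention between the pair, sentence embeddings can be precomputed, which is what makes this scalable.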

Bayesian Neural Word Embedding

no code implementations 21 Mar 2016 Oren Barkan

Recently, several works in the domain of natural language processing presented successful methods for word embedding.

Item2Vec: Neural Item Embedding for Collaborative Filtering

7 code implementations 14 Mar 2016 Oren Barkan, Noam Koenigstein

Many Collaborative Filtering (CF) algorithms are item-based in the sense that they analyze item-item relations in order to produce item similarities.

Collaborative Filtering
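Item2Vec's key departure from word2vec is that a user's item set has no word order, so every pair of distinct items in the set is a (target, context) training pair for skip-gram with negative sampling, rather than only pairs inside a sliding window. A minimal sketch of that pair generation (the `basket` name is illustrative):

```python
from itertools import permutations

def item2vec_pairs(basket):
    """All ordered pairs of distinct items in one user's set become
    positive skip-gram pairs -- the set-based analogue of word2vec's
    sliding context window."""
    return list(permutations(basket, 2))
```

These pairs would then feed a standard skip-gram-with-negative-sampling trainer, and cosine similarity between the learned item vectors yields item-item recommendations.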

Gaussian Process Regression for Out-of-Sample Extension

no code implementations 7 Mar 2016 Oren Barkan, Jonathan Weill, Amir Averbuch

Many of the existing methods produce a low dimensional representation that attempts to describe the intrinsic geometric structure of the original data.
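Out-of-sample extension with Gaussian process regression amounts to the standard GP posterior mean, K_* (K + sigma^2 I)^{-1} y, applied to a low-dimensional coordinate learned on the training set. The RBF kernel and toy hyperparameters below are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def gpr_predict(X_train, y_train, X_new, lengthscale=1.0, noise=1e-2):
    """GP regression posterior mean with an RBF kernel: extends the
    mapping fitted on X_train to unseen points X_new."""
    def rbf(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * lengthscale ** 2))
    K = rbf(X_train, X_train) + noise * np.eye(len(X_train))  # regularized Gram matrix
    K_star = rbf(X_new, X_train)                               # cross-covariances
    return K_star @ np.linalg.solve(K, y_train)
```

At a training point the prediction reproduces the training value up to the noise level, while new points are interpolated smoothly under the kernel.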


Adaptive Compressed Tomography Sensing

no code implementations CVPR 2013 Oren Barkan, Jonathan Weill, Amir Averbuch, Shai Dekel

One of the main challenges in Computed Tomography (CT) is how to balance between the amount of radiation the patient is exposed to during scan time and the quality of the CT image.

Computed Tomography (CT)
