Search Results for author: Oren Barkan

Found 28 papers, 10 papers with code

Item2Vec: Neural Item Embedding for Collaborative Filtering

7 code implementations · 14 Mar 2016 · Oren Barkan, Noam Koenigstein

Many Collaborative Filtering (CF) algorithms are item-based in the sense that they analyze item-item relations in order to produce item similarities.

Collaborative Filtering
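As a toy illustration of the item-based CF idea stated above (item-item similarities derived from interaction data), the sketch below scores item pairs by the cosine similarity of their user-interaction columns. This is not the paper's SGNS-based Item2Vec embedding itself; the matrix and item indices are entirely hypothetical:

```python
from math import sqrt

# Hypothetical user-item interaction matrix (rows: users, columns: items 0..3).
interactions = [
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 1, 1, 1],
    [0, 0, 1, 1],
]

def item_vector(matrix, j):
    """Column j of the interaction matrix: which users consumed item j."""
    return [row[j] for row in matrix]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

n_items = len(interactions[0])
sim = {(i, j): cosine(item_vector(interactions, i), item_vector(interactions, j))
       for i in range(n_items) for j in range(n_items)}

# Items 0 and 1 share two users; items 0 and 3 share none.
print(round(sim[(0, 1)], 3))  # 0.816
print(sim[(0, 3)])            # 0.0
```

Item2Vec replaces these raw co-occurrence columns with learned low-dimensional embeddings, but the end product is the same kind of item-item similarity table.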

DiffMoog: a Differentiable Modular Synthesizer for Sound Matching

1 code implementation · 23 Jan 2024 · Noy Uzrad, Oren Barkan, Almog Elharar, Shlomi Shvartzman, Moshe Laufer, Lior Wolf, Noam Koenigstein

We introduce an open-source platform that comprises DiffMoog and an end-to-end sound matching framework.

Audio Synthesis

Scalable Attentive Sentence-Pair Modeling via Distilled Sentence Embedding

1 code implementation · 14 Aug 2019 · Oren Barkan, Noam Razin, Itzik Malkiel, Ori Katz, Avi Caciularu, Noam Koenigstein

In this paper, we introduce Distilled Sentence Embedding (DSE) - a model that is based on knowledge distillation from cross-attentive models, focusing on sentence-pair tasks.

Knowledge Distillation · Natural Language Understanding · +4
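To give a flavor of score distillation for sentence pairs, the toy code below regresses a dot-product student score onto a fixed teacher score. This is a loose sketch only: DSE's actual cross-attentive teacher and transformer student are not reproduced here, and all values and names are made up:

```python
# Hypothetical sketch: the student scores a sentence pair by the dot
# product of its two embeddings and is trained to match the teacher's
# (cross-attentive) score for the same pair.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def distill_step(emb_a, emb_b, teacher_score, lr=0.1):
    """One gradient step on the squared error between student and teacher."""
    err = dot(emb_a, emb_b) - teacher_score
    # d(err^2)/d(emb_a) = 2 * err * emb_b, and symmetrically for emb_b.
    new_a = [a - lr * 2 * err * b for a, b in zip(emb_a, emb_b)]
    new_b = [b - lr * 2 * err * a for a, b in zip(emb_a, emb_b)]
    return new_a, new_b

emb_a, emb_b = [0.5, 0.0], [0.5, 0.0]
teacher = 0.9  # score assumed to come from the cross-attentive teacher
for _ in range(50):
    emb_a, emb_b = distill_step(emb_a, emb_b, teacher)
# The student's dot-product score has converged toward the teacher's score.
print(abs(dot(emb_a, emb_b) - teacher) < 1e-3)
```

The payoff of this setup, as in DSE, is that at inference time each sentence can be embedded once and pairs scored with a cheap dot product instead of a full cross-attention pass.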

Forecasting CPI Inflation Components with Hierarchical Recurrent Neural Networks

1 code implementation · 16 Nov 2020 · Oren Barkan, Jonathan Benchimol, Itamar Caspi, Eliya Cohen, Allon Hammer, Noam Koenigstein

We present a hierarchical architecture based on Recurrent Neural Networks (RNNs) for predicting disaggregated inflation components of the Consumer Price Index (CPI).

Visual Explanations via Iterated Integrated Attributions

1 code implementation · ICCV 2023 · Oren Barkan, Yehonatan Elisha, Yuval Asher, Amit Eshel, Noam Koenigstein

We introduce Iterated Integrated Attributions (IIA) - a generic method for explaining the predictions of vision models.
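For intuition, here is a plain integrated-gradients sketch on a simple analytic function. IIA, per the paper, iterates and generalizes this family of path-attribution methods; the toy example below does not attempt to reproduce IIA itself:

```python
# Integrated gradients for f(x1, x2) = x1 * x2, with gradients
# written out analytically so the sketch needs no autodiff library.

def f(x):
    return x[0] * x[1]

def grad_f(x):
    return [x[1], x[0]]

def integrated_gradients(x, baseline, steps=1000):
    """Riemann (midpoint) approximation of (x - b) * mean path gradient."""
    attr = [0.0] * len(x)
    for k in range(1, steps + 1):
        alpha = (k - 0.5) / steps
        point = [b + alpha * (xi - b) for xi, b in zip(x, baseline)]
        g = grad_f(point)
        for i in range(len(x)):
            attr[i] += (x[i] - baseline[i]) * g[i] / steps
    return attr

attr = integrated_gradients([2.0, 3.0], [0.0, 0.0])
# Completeness: attributions sum to f(x) - f(baseline) = 6.
print(round(sum(attr), 4))  # 6.0
```

The completeness property checked in the last line is what makes attributions of this kind usable as per-pixel explanation maps for vision models.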

Learning to Explain: A Model-Agnostic Framework for Explaining Black Box Models

1 code implementation · 25 Oct 2023 · Oren Barkan, Yuval Asher, Amit Eshel, Yehonatan Elisha, Noam Koenigstein

We present Learning to Explain (LTX), a model-agnostic framework designed for providing post-hoc explanations for vision models.

Counterfactual

Deep Integrated Explanations

1 code implementation · 23 Oct 2023 · Oren Barkan, Yehonatan Elisha, Jonathan Weill, Yuval Asher, Amit Eshel, Noam Koenigstein

This paper presents Deep Integrated Explanations (DIX) - a universal method for explaining vision models.

Bayesian Neural Word Embedding

no code implementations · 21 Mar 2016 · Oren Barkan

Recently, several works in the domain of natural language processing presented successful methods for word embedding.

Gaussian Process Regression for Out-of-Sample Extension

no code implementations · 7 Mar 2016 · Oren Barkan, Jonathan Weill, Amir Averbuch

Many of the existing methods produce a low dimensional representation that attempts to describe the intrinsic geometric structure of the original data.

Regression
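As background for the GP-based out-of-sample extension, the sketch below computes the standard GP posterior mean, k*ᵀ(K + σ²I)⁻¹y, on a two-point toy problem with an RBF kernel. This is generic GP regression, not the paper's specific extension scheme; the data is hypothetical:

```python
from math import exp

def rbf(a, b, length=1.0):
    """Squared-exponential (RBF) kernel for scalar inputs."""
    return exp(-0.5 * ((a - b) / length) ** 2)

# Two training points and a small noise term on the kernel diagonal.
X, y = [0.0, 1.0], [0.0, 1.0]
noise = 1e-6

# 2x2 kernel matrix K + noise*I and its explicit inverse.
k00 = rbf(X[0], X[0]) + noise
k11 = rbf(X[1], X[1]) + noise
k01 = rbf(X[0], X[1])
det = k00 * k11 - k01 * k01
inv = [[k11 / det, -k01 / det], [-k01 / det, k00 / det]]

def predict(x_star):
    """Posterior mean at x_star: k_star^T (K + noise I)^{-1} y."""
    k_star = [rbf(x_star, X[0]), rbf(x_star, X[1])]
    alpha = [sum(inv[i][j] * y[j] for j in range(2)) for i in range(2)]
    return sum(k_star[i] * alpha[i] for i in range(2))

# With tiny noise, the posterior mean interpolates the training targets.
print(round(predict(1.0), 3))  # 1.0
```

In the out-of-sample-extension setting, `predict` is what lets a low-dimensional embedding learned on the training set be evaluated at new points without recomputing the embedding.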

Adaptive Compressed Tomography Sensing

no code implementations · CVPR 2013 · Oren Barkan, Jonathan Weill, Amir Averbuch, Shai Dekel

One of the main challenges in Computed Tomography (CT) is how to balance between the amount of radiation the patient is exposed to during scan time and the quality of the CT image.

Computed Tomography (CT)

Multiscale Self Attentive Convolutions for Vision and Language Modeling

no code implementations · 3 Dec 2019 · Oren Barkan

Self attention mechanisms have become a key building block in many state-of-the-art language understanding models.

Language Modelling

Attentive Item2Vec: Neural Attentive User Representations

no code implementations · 15 Feb 2020 · Oren Barkan, Avi Caciularu, Ori Katz, Noam Koenigstein

However, it is possible that a certain early movie may suddenly become more relevant in the presence of a popular sequel.

Recommendation Systems

Neural Attentive Multiview Machines

no code implementations · 18 Feb 2020 · Oren Barkan, Ori Katz, Noam Koenigstein

An important problem in multiview representation learning is finding the optimal combination of views with respect to the specific task at hand.

Representation Learning

Bayesian Hierarchical Words Representation Learning

no code implementations · ACL 2020 · Oren Barkan, Idan Rejwan, Avi Caciularu, Noam Koenigstein

BHWR facilitates Variational Bayes word representation learning combined with semantic taxonomy modeling via hierarchical priors.

Representation Learning

Within-Between Lexical Relation Classification

no code implementations · EMNLP 2020 · Oren Barkan, Avi Caciularu, Ido Dagan

We propose the novel Within-Between Relation model for recognizing lexical-semantic relations between words.

Classification · General Classification · +2

GAM: Explainable Visual Similarity and Classification via Gradient Activation Maps

no code implementations · 2 Sep 2021 · Oren Barkan, Omri Armstrong, Amir Hertz, Avi Caciularu, Ori Katz, Itzik Malkiel, Noam Koenigstein

The algorithmic advantages of GAM are explained in detail, and validated empirically, where it is shown that GAM outperforms its alternatives across various tasks and datasets.

Classification

Cold Item Integration in Deep Hybrid Recommenders via Tunable Stochastic Gates

no code implementations · 12 Dec 2021 · Oren Barkan, Roy Hirsch, Ori Katz, Avi Caciularu, Jonathan Weill, Noam Koenigstein

Next, we propose a novel hybrid recommendation algorithm that bridges these two conflicting objectives and enables a harmonized balance between preserving high accuracy for warm items while effectively promoting completely cold items.

Collaborative Filtering

Grad-SAM: Explaining Transformers via Gradient Self-Attention Maps

no code implementations · 23 Apr 2022 · Oren Barkan, Edan Hauon, Avi Caciularu, Ori Katz, Itzik Malkiel, Omri Armstrong, Noam Koenigstein

Transformer-based language models significantly advanced the state-of-the-art in many linguistic tasks.

Interpreting BERT-based Text Similarity via Activation and Saliency Maps

no code implementations · 13 Aug 2022 · Itzik Malkiel, Dvir Ginzburg, Oren Barkan, Avi Caciularu, Jonathan Weill, Noam Koenigstein

Recently, there has been growing interest in the ability of Transformer-based models to produce meaningful embeddings of text with several applications, such as text similarity.

text similarity

MetricBERT: Text Representation Learning via Self-Supervised Triplet Training

no code implementations · 13 Aug 2022 · Itzik Malkiel, Dvir Ginzburg, Oren Barkan, Avi Caciularu, Yoni Weill, Noam Koenigstein

We present MetricBERT, a BERT-based model that learns to embed text under a well-defined similarity metric while simultaneously adhering to the "traditional" masked-language task.

Representation Learning

GPT-Calls: Enhancing Call Segmentation and Tagging by Generating Synthetic Conversations via Large Language Models

no code implementations · 9 Jun 2023 · Itzik Malkiel, Uri Alon, Yakir Yehuda, Shahar Keren, Oren Barkan, Royi Ronen, Noam Koenigstein

The online phase is applied to every call separately and scores the similarity between the transcribed conversation and the topic anchors found in the offline phase.

Segmentation · TAG

Representation Learning via Variational Bayesian Networks

no code implementations · 28 Jun 2023 · Oren Barkan, Avi Caciularu, Idan Rejwan, Ori Katz, Jonathan Weill, Itzik Malkiel, Noam Koenigstein

We present Variational Bayesian Network (VBN) - a novel Bayesian entity representation learning model that utilizes hierarchical and relational side information and is particularly useful for modeling entities in the "long tail", where data is scarce.

Bayesian Inference · Representation Learning

In Search of Truth: An Interrogation Approach to Hallucination Detection

1 code implementation · 5 Mar 2024 · Yakir Yehuda, Itzik Malkiel, Oren Barkan, Jonathan Weill, Royi Ronen, Noam Koenigstein

Despite the many advances of Large Language Models (LLMs) and their unprecedentedly rapid evolution, their impact and integration into every facet of our daily lives are limited for various reasons.

Hallucination
